42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2011 CFR
2011-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2014 CFR
2014-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2012 CFR
2012-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
DOE Office of Scientific and Technical Information (OSTI.GOV)
HU TA
2009-10-26
The objective was to assess the steady-state flammability level under normal and off-normal ventilation conditions. The hydrogen generation rate was calculated for 177 tanks using the rate equation model. Flammability calculations based on hydrogen, ammonia, and methane were performed for 177 tanks for various scenarios.
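The abstract does not give its rate-equation model, but the flammability side of such a calculation is commonly handled with Le Chatelier's mixing rule for multi-fuel mixtures. A minimal sketch, using standard handbook lower-flammability limits for the three fuels named; the headspace concentrations are hypothetical:

```python
# Standard handbook lower flammability limits (vol %) for the three fuels.
LFL = {"hydrogen": 4.0, "ammonia": 15.0, "methane": 5.0}

def mixture_lfl(fuel_fractions_pct):
    """Le Chatelier rule: LFL_mix = 100 / sum(y_i / LFL_i), where y_i are
    the percentages each fuel contributes to the combustible portion."""
    return 100.0 / sum(y / LFL[gas] for gas, y in fuel_fractions_pct.items())

def fraction_of_lfl(concentrations_vol_pct):
    """Steady-state flammability level expressed as a fraction of the
    mixture LFL (1.0 means the mixture is at its flammability limit)."""
    total = sum(concentrations_vol_pct.values())
    if total == 0:
        return 0.0
    fractions = {g: 100.0 * c / total for g, c in concentrations_vol_pct.items()}
    return total / mixture_lfl(fractions)

# Hypothetical headspace: 1.0 vol% H2, 0.5 vol% NH3, 0.1 vol% CH4
level = fraction_of_lfl({"hydrogen": 1.0, "ammonia": 0.5, "methane": 0.1})
```

A level well below 1.0 would indicate the headspace is below the flammable range for that scenario.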
NASA Astrophysics Data System (ADS)
Takemine, S.; Rikimaru, A.; Takahashi, K.
Rice is one of the staple foods of the world. High-quality rice production requires periodically collecting rice growth data to control the growth of rice. Plant height, stem number, and leaf color are well-known parameters indicating rice growth, and a growth diagnosis method based on these parameters is used operationally in Japan, although collecting them by field survey requires considerable labor and time. Recently, a labor-saving method for rice growth diagnosis has been proposed that is based on the vegetation cover rate of rice. The vegetation cover rate is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction, with discrimination performed by automatic binarization processing. However, a cover-rate calculation that depends only on automatic binarization can underestimate the vegetation cover rate as the rice grows. In this paper, a calculation method for vegetation cover rate is proposed that combines the automatic binarization process with growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and both were compared with reference values obtained by visual interpretation. The comparison showed that the accuracy of discriminating rice plant areas was increased by the proposed method.
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify the evaporative vapor generation based on real parking activity data. As compared to the existing methods, two improvements are applied in this new approach to reduce the uncertainties: First, evaporative vapor generation from diurnal parking events is usually calculated based on estimated average parking duration for the whole fleet, while in this study, vapor generation rate is calculated based on parking activities distribution. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then adopted with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and the weighted vapor generation rate is 5-8% less than calculation without considering parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
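The weighting idea described above can be sketched directly: an event of a given parked duration accrues the hourly incremental generation rates for that many hours, and events are weighted by their share of the parking-duration distribution. All numbers below are illustrative, not the study's data:

```python
# Hypothetical hourly incremental vapor generation rates (grams generated
# during the 1st, 2nd, 3rd, 4th hour of a parking event).
hourly_rate = [0.2, 0.3, 0.5, 0.4]
# Hypothetical parking-activity distribution: fraction of events lasting
# 1, 2, 3, and 4 hours.
parking_share = [0.4, 0.3, 0.2, 0.1]

def weighted_generation(rates, shares):
    """Weighted average grams per event: an event lasting n hours accrues
    the first n hourly increments, weighted by that duration's share."""
    total = 0.0
    for n, share in enumerate(shares, start=1):
        total += share * sum(rates[:n])
    return total

g_per_event = weighted_generation(hourly_rate, parking_share)
```

Using one fleet-average duration instead of the distribution would weight every event identically, which is the bias the paper's approach removes.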
SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.
Yuan, Y; Duan, J; Popple, R; Brezovich, I
2012-06-01
To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on the Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung, and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by percentage error for ion chamber dose and by the γ>1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate was significantly reduced (within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in film dosimetry for 11 patients and in ion chamber measurements for 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
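A minimal one-dimensional sketch of the 3%/3 mm gamma test and its γ>1 failure rate; clinical gamma analysis is performed on 2-D film or 3-D dose grids, so this is illustrative only, with toy dose profiles:

```python
import math

def gamma_index(measured, calculated, spacing_mm, dd=0.03, dta_mm=3.0):
    """Per-point gamma: for each measured point, the minimum over calculated
    points of the combined dose-difference / distance-to-agreement metric.
    Dose difference is normalized to dd (3%) of the measured maximum."""
    norm = dd * max(measured)
    gammas = []
    for i, dm in enumerate(measured):
        best = float("inf")
        for j, dc in enumerate(calculated):
            dist_mm = (i - j) * spacing_mm
            best = min(best, math.hypot(dist_mm / dta_mm, (dc - dm) / norm))
        gammas.append(best)
    return gammas

def failure_rate(gammas):
    """Fraction of points with gamma > 1 (the failure rate quoted above)."""
    return sum(g > 1.0 for g in gammas) / len(gammas)

# Toy relative-dose profiles at 2 mm spacing; the last calculated point
# is deliberately far off so one point fails.
meas = [1.00, 0.98, 0.95, 0.90]
calc = [1.00, 0.97, 0.96, 0.70]
rate = failure_rate(gamma_index(meas, calc, spacing_mm=2.0))
```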
Metabolically Derived Human Ventilation Rates: A Revised ...
EPA announced the availability of the final report, Metabolically Derived Human Ventilation Rates: A Revised Approach Based Upon Oxygen Consumption Rates. This report provides a revised approach for calculating an individual's ventilation rate directly from their oxygen consumption rate. This approach will be used to update the ventilation rate information in the Exposure Factors Handbook, which serves as a resource for exposure assessors calculating inhalation and other exposures. In this report, EPA presents a revised approach in which ventilation rate is calculated directly from an individual's oxygen consumption rate.
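The report's core step, deriving a ventilation rate directly from oxygen consumption, reduces to multiplying by a ventilatory equivalent. The value used below (about 27 L of air ventilated per litre of O2 consumed) is a typical figure assumed for illustration, not a number taken from the EPA report:

```python
# Assumed ventilatory equivalent: litres of air ventilated per litre of
# oxygen consumed. This is a typical value, not the report's figure.
VENTILATORY_EQUIVALENT = 27.0

def ventilation_rate_L_per_min(vo2_L_per_min):
    """Minute ventilation (L/min) from an oxygen consumption rate (L/min)."""
    return VENTILATORY_EQUIVALENT * vo2_L_per_min

# A resting adult consuming roughly 0.3 L O2/min:
ve = ventilation_rate_L_per_min(0.3)
```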
Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80
NASA Astrophysics Data System (ADS)
Pruet, Jason; Fuller, George M.
2003-11-01
We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.
Kusano, Maggie; Caldwell, Curtis B
2014-07-01
A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. 
The data provided by this paper should assist in improving the accuracy and tractability of dose and shielding calculations for nuclear medicine facility design.
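The three-parameter Archer equation mentioned above has the broad-beam transmission form B(x) = [(1 + β/α)·e^(αγx) − β/α]^(−1/γ). A sketch with placeholder parameters; the paper's fitted α, β, γ values for specific radionuclides and barrier materials are not reproduced here:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission factor B(x) through barrier thickness x
    (same length unit as 1/alpha and 1/beta) via the Archer equation."""
    r = beta / alpha
    return ((1 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

# Placeholder parameters (units 1/mm), for illustration only:
alpha, beta, gamma = 2.0, 15.0, 0.5
b_half_mm = archer_transmission(0.5, alpha, beta, gamma)
```

Two sanity properties make the fit convenient for shielding work: B(0) = 1 exactly, and B decreases monotonically with thickness, so the required barrier thickness can be solved for in closed form by inverting the equation.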
Research on Signature Verification Method Based on Discrete Fréchet Distance
NASA Astrophysics Data System (ADS)
Fang, J. L.; Wu, W.
2018-05-01
This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication using a single signature feature. It addresses the heavy computational workload of extracting global feature templates in online handwritten signature authentication, as well as the problem of unreasonable signature feature selection. In the experiment, the false acceptance rate (FAR) and false rejection rate (FRR) of the signatures are measured and the average equal error rate (AEER) is calculated. The feasibility of the combined template scheme is verified by comparing the average equal error rates of the combined template and the original template.
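The paper's template construction is not reproduced here, but the underlying distance measure is standard: the discrete Fréchet distance between two point sequences can be computed with the usual dynamic program (the Eiter-Mannila coupling measure). A sketch:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between point sequences P and Q
    (lists of (x, y) tuples), via memoized recursion on the coupling."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Advance along P, along Q, or along both; take the cheapest coupling.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

# Two similar pen strokes offset vertically by 0.5:
a = [(0, 0.0), (1, 1.0), (2, 2.0)]
b = [(0, 0.5), (1, 1.5), (2, 2.5)]
dist = discrete_frechet(a, b)
```

In a verification setting, a test signature's distance to the enrolled template would be thresholded; sweeping the threshold trades FAR against FRR, and the equal error rate is the point where the two curves cross.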
NASA Technical Reports Server (NTRS)
Gokoglu, S. A.; Chen, B. K.; Rosner, D. E.
1984-01-01
The computer program based on multicomponent chemically frozen boundary layer (CFBL) theory for calculating vapor and/or small particle deposition rates is documented. A specific application to perimeter-averaged Na2SO4 deposition rate calculations on a cylindrical collector is demonstrated. The manual includes a typical program input and output for users.
Implementation of Online Promethee Method for Poor Family Change Rate Calculation
NASA Astrophysics Data System (ADS)
Aji, Dhady Lukito; Suryono; Widodo, Catur Edi
2018-02-01
This research implements an online calculation of the rate of change of the number of poor families using the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE). The system is useful for monitoring poverty in a region and for administrative services related to the poverty rate. It consists of client computers and a server connected via the internet. Poor-family residence data are obtained from the government; in addition, survey data are entered through the client computer in each administrative village, covering the 23 criteria established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weight is used to determine poverty status. PROMETHEE output can also be used to rank the poverty of the registered population on the server based on the net flow value. The rate of change is calculated by comparing the current poverty rate with the previous poverty rate, and the results can be viewed online in real time on the server as numbers and graphs. Test results show that the system can classify poverty status, calculate the rate of change of the poverty rate, and determine the poverty value and ranking of each resident.
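The PROMETHEE net flow used for ranking can be sketched with the simplest generalized criterion, a step preference function (1 if one alternative strictly beats another on a criterion, else 0); the system's 23 government criteria and weights are replaced here by toy data:

```python
def netflows(scores, weights):
    """PROMETHEE II net flows. scores[i][k] is the value of alternative i
    on criterion k (higher = better); weights sum to 1. Returns
    phi(i) = phi_plus(i) - phi_minus(i) for each alternative."""
    n = len(scores)
    phi = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pairs = list(zip(scores[i], scores[j]))
            # Step preference function: weight counts when i strictly beats j.
            pref_ij = sum(w for w, (a, b) in zip(weights, pairs) if a > b)
            pref_ji = sum(w for w, (a, b) in zip(weights, pairs) if b > a)
            phi[i] += (pref_ij - pref_ji) / (n - 1)
    return phi

# Three families scored on two toy criteria, weighted 0.6 / 0.4:
families = [[3, 1], [1, 2], [2, 3]]
weights = [0.6, 0.4]
phi = netflows(families, weights)  # higher net flow = better ranked
```

Ranking by descending net flow gives the complete PROMETHEE II order; net flows always sum to zero across alternatives.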
SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to calculate acid and neutral hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition states of a ...
Head rice rate measurement based on concave point matching
Yao, Yuan; Wu, Wei; Yang, Tianle; Liu, Tao; Chen, Wen; Chen, Chen; Li, Rui; Zhou, Tong; Sun, Chengming; Zhou, Yue; Li, Xinlu
2017-01-01
Head rice rate is an important factor affecting rice quality. In this study, an inflection point detection-based technology was applied to measure the head rice rate by combining a vibrator and a conveyor belt for bulk grain image acquisition. The edge center mode proportion method (ECMP) was applied for concave point matching, in which concave matching and separation were performed under collaborative constraint conditions, followed by rice length calculation with a minimum enclosing rectangle (MER) to identify the head rice. Finally, the head rice rate was calculated as the ratio of the total area of head rice to the overall coverage of rice. Results showed that bulk grain image acquisition can be realized with the test equipment, and the accuracy of separation for both indica rice and japonica rice exceeded 95%. An increase in the number of rice grains did not significantly affect ECMP and MER. High accuracy can be ensured with MER for calculating the head rice rate, keeping the relative error against real values below 3%. The test results show that the method is reliable as a reference for head rice rate calculation studies. PMID:28128315
40 CFR 74.22 - Actual SO2 emissions rate.
Code of Federal Regulations, 2014 CFR
2014-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
40 CFR 74.22 - Actual SO2 emissions rate.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
40 CFR 74.22 - Actual SO2 emissions rate.
Code of Federal Regulations, 2011 CFR
2011-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
40 CFR 74.22 - Actual SO2 emissions rate.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calculations under this section based on data submitted under § 74.20 for the following calendar year: (1) For combustion sources that commenced operation prior to January 1, 1985, the calendar year for calculating the... January 1, 1985, the calendar year for calculating the actual SO2 emissions rate shall be the first year...
Cool-down flow-rate limits imposed by thermal stresses in LNG pipelines
NASA Astrophysics Data System (ADS)
Novak, J. K.; Edeskuty, F. J.; Bartlit, J. R.
Warm cryogenic pipelines are usually cooled to operating temperature by a small, steady flow of the liquid cryogen. If this flow rate is too high or too low, undesirable stresses will be produced. Low flow-rate limits based on avoidance of stratified two-phase flow were calculated for pipelines cooled with liquid hydrogen or nitrogen. High flow-rate limits for stainless steel and aluminum pipelines cooled by liquid hydrogen or nitrogen were determined by calculating thermal stress in thick components vs flow rate and then selecting some reasonable stress limits. The present work extends these calculations to pipelines made of AISI 304 stainless steel, 6061 aluminum, or ASTM A420 9% nickel steel cooled by liquid methane or a typical natural gas. Results indicate that aluminum and 9% nickel steel components can tolerate very high cool-down flow rates, based on not exceeding the material yield strength.
Code of Federal Regulations, 2010 CFR
2010-10-01
... patient utilization calendar year as identified from Medicare claims is calendar year 2007. (4) Wage index... calculating the per-treatment base rate for 2011 are as follows: (1) Per patient utilization in CY 2007, 2008..., 2008 or 2009 to determine the year with the lowest per patient utilization. (2) Update of per treatment...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the Postal Service files its notice of rate adjustment and dividing the sum by 12 (Recent Average... values immediately preceding the Recent Average and dividing the sum by 12 (Base Average). Finally, the full year limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1...
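The moving-average cap described in this passage is plain arithmetic: average the 12 most recent monthly index values (Recent Average), average the 12 values immediately preceding them (Base Average), then divide and subtract 1. A sketch with hypothetical monthly index values, oldest first:

```python
# Hypothetical monthly price-index values covering 24 months, oldest first.
cpi = [100 + 0.2 * m for m in range(24)]

recent_average = sum(cpi[-12:]) / 12     # 12 most recent monthly values
base_average = sum(cpi[-24:-12]) / 12    # the 12 values preceding those
full_year_limitation = recent_average / base_average - 1
```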
Chanani, Sheila; Wacksman, Jeremy; Deshmukh, Devika; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja
2016-12-01
Acute malnutrition is linked to child mortality and morbidity. Community-Based Management of Acute Malnutrition (CMAM) programs can be instrumental in large-scale detection and treatment of undernutrition. The World Health Organization (WHO) 2006 weight-for-height/length tables are diagnostic tools available to screen for acute malnutrition. Frontline workers (FWs) in a CMAM program in Dharavi, Mumbai, were using CommCare, a mobile application, for monitoring and case management of children in combination with the paper-based WHO simplified tables. A strategy was undertaken to digitize the WHO tables into the CommCare application. To measure differences in diagnostic accuracy in community-based screening for acute malnutrition, by FWs, using a mobile-based solution. Twenty-seven FWs initially used the paper-based tables and then switched to an updated mobile application that included a nutritional grade calculator. Human error rates specifically associated with grade classification were calculated by comparison of the grade assigned by the FW to the grade each child should have received based on the same WHO tables. Cohen kappa coefficient, sensitivity and specificity rates were also calculated and compared for paper-based grade assignments and calculator grade assignments. Comparing FWs (N = 14) who completed at least 40 screenings without and 40 with the calculator, the error rates were 5.5% and 0.7%, respectively (p < .0001). Interrater reliability (κ) increased to an almost perfect level (>.90), from .79 to .97, after switching to the mobile calculator. Sensitivity and specificity also improved significantly. The mobile calculator significantly reduces an important component of human error in using the WHO tables to assess acute malnutrition at the community level. © The Author(s) 2016.
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators can minimise these errors. To evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented from April 2015 in the Paediatric Critical Care Unit. A prospective study, five months before and five months after implementation, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to implementation and 119 electronic infusion calculator prescriptions were reviewed afterwards. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%) (p < 0.001). Electronic infusion calculator prescriptions had no errors in dose, volume or rate calculation compared with handwritten prescriptions, hence warranting very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescriptions significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time resources.
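The arithmetic such a calculator automates is straightforward: convert a weight-based dose to an hourly mass rate, then divide by the syringe concentration to get a pump rate. The dose, weight, and concentration below are illustrative, not the unit's actual protocol:

```python
def infusion_rate_ml_per_h(dose_mcg_kg_min, weight_kg, conc_mcg_per_ml):
    """Pump rate (mL/h) for a continuous infusion: convert the prescribed
    mcg/kg/min dose to mcg/h, then divide by syringe concentration."""
    mcg_per_h = dose_mcg_kg_min * weight_kg * 60
    return mcg_per_h / conc_mcg_per_ml

# Illustrative example: 0.1 mcg/kg/min for a 10 kg child,
# from a syringe prepared at 200 mcg/mL.
rate = infusion_rate_ml_per_h(0.1, 10, 200)
```

Automating this chain removes exactly the error classes the study counted by hand: dose, volume, and rate miscalculation.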
RadShield: semiautomated shielding design using a floor plan driven graphical user interface
Wu, Dee H.; Yang, Kai; Rutel, Isaac B.
2016-01-01
The purpose of this study was to introduce and describe the development of RadShield, a Java‐based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air‐kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations are validated by two approaches: a geometry‐based approach and a manual approach. A series of geometry‐based equations were derived giving the maximum air‐kerma rate magnitude and location through a first derivative root finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)‐certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air‐kerma rate was compared against the geometry‐based approach and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry‐based approach and RadShield's approach in finding the magnitude and location of the maximum air‐kerma rate was within 0.00124% and 14 mm. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, differences in locating the most sensitive calculation point on the floor plan for one of the barriers was not considered in the medical physicist's calculation and was revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air‐kerma rate and barrier thickness using NCRP Report 147 methodology. 
Visual inspection alone of the 2D X‐ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air‐kerma rate or barrier thickness. PACS number(s): 87.55.N, 87.52.‐g, 87.59.Bh, 87.57.‐s PMID:27685128
RadShield: semiautomated shielding design using a floor plan driven graphical user interface.
DeLorenzo, Matthew C; Wu, Dee H; Yang, Kai; Rutel, Isaac B
2016-09-08
The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations are validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations were derived giving the maximum air-kerma rate magnitude and location through a first derivative root finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate was within 0.00124% and 14 mm. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, differences in locating the most sensitive calculation point on the floor plan for one of the barriers was not considered in the medical physicist's calculation and was revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology.
Visual inspection alone of the 2D X-ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air-kerma rate or barrier thickness. © 2016 The Authors.
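The "spatially distributed calculations at many points" can be sketched as a grid search for the maximum unshielded air-kerma rate using the inverse-square form K = K1 · N / d² (with K1 the air kerma per patient at 1 m and N the patient workload), rather than one hand-picked point. Geometry and numbers below are illustrative, not from the paper:

```python
def max_airkerma(source, points, k1_per_patient, n_patients):
    """Scan candidate occupied points and return (max kerma, location),
    using the inverse-square unshielded estimate K = K1 * N / d**2."""
    best = None
    for (x, y) in points:
        d2 = (x - source[0]) ** 2 + (y - source[1]) ** 2
        k = k1_per_patient * n_patients / d2
        if best is None or k > best[0]:
            best = (k, (x, y))
    return best

# Illustrative geometry: X-ray source at (2, 0) m, candidate points spaced
# every 0.5 m along a wall 2 m away.
grid = [(x * 0.5, 2.0) for x in range(9)]
k_max, where = max_airkerma((2.0, 0.0), grid, k1_per_patient=0.01, n_patients=100)
```

The point directly opposite the source wins here, but with multiple sources or oblique geometry the maximum can land somewhere visual inspection would miss, which is the failure mode the abstract describes.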
Theoretical rate constants of super-exchange hole transfer and thermally induced hopping in DNA.
Shimazaki, Tomomi; Asai, Yoshihiro; Yamashita, Koichi
2005-01-27
Recently, the electronic properties of DNA have been extensively studied, because its conductivity is important not only to the study of fundamental biological problems, but also in the development of molecular-sized electronics and biosensors. We have studied theoretically the reorganization energies, the activation energies, the electronic coupling matrix elements, and the rate constants of hole transfer in B-form double-helix DNA in water. To accommodate the effects of DNA nuclear motions, a subset of reaction coordinates for hole transfer was extracted from classical molecular dynamics (MD) trajectories of DNA in water and then used for ab initio quantum chemical calculations of electron coupling constants based on the generalized Mulliken-Hush model. A molecular mechanics (MM) method was used to determine the nuclear Franck-Condon factor. The rate constants for two types of mechanisms of hole transfer-the thermally induced hopping (TIH) and the super-exchange mechanisms-were determined based on Marcus theory. We found that the calculated matrix elements are strongly dependent on the conformations of the nucleobase pairs of hole-transferable DNA and extend over a wide range of values for the "rise" base-step parameter but cluster around a particular value for the "twist" parameter. The calculated activation energies are in good agreement with experimental results. Whereas the rate constant for the TIH mechanism is not dependent on the number of A-T nucleobase pairs that act as a bridge, the rate constant for the super-exchange process rapidly decreases when the length of the bridge increases. These characteristic trends in the calculated rate constants effectively reproduce those in the experimental data of Giese et al. [Nature 2001, 412, 318]. The calculated rate constants were also compared with the experimental results of Lewis et al. [Nature 2000, 406, 51].
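The hole-transfer rates above rest on the nonadiabatic Marcus expression, k = (2π/ħ)|H_ab|² (4πλk_BT)^(−1/2) exp[−(ΔG° + λ)²/(4λk_BT)]. A sketch with round illustrative parameters in eV, not the paper's computed couplings or reorganization energies:

```python
import math

HBAR = 6.582e-16  # reduced Planck constant, eV*s
KB = 8.617e-5     # Boltzmann constant, eV/K

def marcus_rate(h_ab_eV, lam_eV, dG_eV, T=300.0):
    """Nonadiabatic Marcus rate constant (1/s) from the electronic coupling
    H_ab, reorganization energy lambda, and driving force dG (all in eV)."""
    kbt = KB * T
    prefactor = (2 * math.pi / HBAR) * h_ab_eV ** 2
    franck_condon = math.exp(-(dG_eV + lam_eV) ** 2 / (4 * lam_eV * kbt)) \
        / math.sqrt(4 * math.pi * lam_eV * kbt)
    return prefactor * franck_condon

# Illustrative values: weak coupling 0.01 eV, reorganization energy 1 eV,
# thermoneutral transfer at room temperature.
k = marcus_rate(h_ab_eV=0.01, lam_eV=1.0, dG_eV=0.0)
```

Because k scales as |H_ab|², the rapid fall-off of super-exchange coupling with bridge length directly produces the rapid decrease in rate noted in the abstract, while TIH rates stay roughly bridge-length independent.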
Pan, Wenxiao; Daily, Michael; Baker, Nathan A.
2015-05-07
Background: The calculation of diffusion-controlled ligand binding rates is important for understanding enzyme mechanisms as well as designing enzyme inhibitors. Methods: We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) BC, is considered on the reactive boundaries. This new BC treatment allows for the analysis of enzymes with "imperfect" reaction rates. Results: The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to a mouse acetylcholinesterase (mAChE) monomer. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Conclusions: Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.
A theory for the fracture of thin plates subjected to bending and twisting moments
NASA Technical Reports Server (NTRS)
Hui, C. Y.; Zehnder, Alan T.
1993-01-01
Stress fields near the tip of a through crack in an elastic plate under bending and twisting moments are reviewed assuming both Kirchhoff and Reissner plate theories. The crack tip displacement and rotation fields based on the Reissner theory are calculated. These results are used to calculate the J-integral (energy release rate) for both Kirchhoff and Reissner plate theories. Invoking Simmonds and Duva's (1981) result that the value of the J-integral based on either theory is the same for thin plates, a universal relationship between the Kirchhoff theory stress intensity factors and the Reissner theory stress intensity factors is obtained for thin plates. Calculation of Kirchhoff theory stress intensity factors from finite elements based on energy release rate is illustrated. It is proposed that, for thin plates, fracture toughness and crack growth rates be correlated with the Kirchhoff theory stress intensity factors.
On determining dose rate constants spectroscopically.
Rodriguez, M; Rogers, D W O
2013-01-01
To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of (125)I and (103)Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)] including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated (125)I and (103)Pd sources. Spectra generated by 14 (125)I and 6 (103)Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm(3) voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the (125)I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for (103)Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. The ratio of the intensity of the 31 keV line relative to that of the main peak in (125)I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with TG-43U1 initial spectrum. The (103)Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. 
The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the (125)I and (103)Pd seeds, respectively. However, in both cases there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Our calculated dose rate constants, based on the calculated on-axis spectrum and a line or dual-point source model, are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
A new leakage measurement method for damaged seal material
NASA Astrophysics Data System (ADS)
Wang, Shen; Yao, Xue Feng; Yang, Heng; Yuan, Li; Dong, Yi Feng
2018-07-01
In this paper, a new leakage measurement method based on the temperature field and temperature gradient field is proposed for detecting the leakage location and measuring the leakage rate in damaged seal material. First, a heat transfer leakage model is established, which can calculate the leakage rate based on the temperature gradient field near the damaged zone. Second, a finite element model of an infinite plate with a damaged zone is built to calculate the leakage rate, which fits the simulated leakage rate well. Finally, specimens in a tubular rubber seal with different damage shapes are used to conduct the leakage experiment, validating the correctness of this new measurement principle for the leakage rate and the leakage position. The results indicate the feasibility of the leakage measurement method for damaged seal material based on the temperature gradient field from infrared thermography.
Feldspar dissolution rates in the Topopah Spring Tuff, Yucca Mountain, Nevada
Bryan, C.R.; Helean, K.B.; Marshall, B.D.; Brady, P.V.
2009-01-01
Two different field-based methods are used here to calculate feldspar dissolution rates in the Topopah Spring Tuff, the host rock for the proposed nuclear waste repository at Yucca Mountain, Nevada. The center of the tuff is a high-silica rhyolite, consisting largely of alkali feldspar (∼60 wt%) and quartz polymorphs (∼35 wt%) that formed by devitrification of rhyolitic glass as the tuff cooled. First, the abundance of secondary aluminosilicates is used to estimate the cumulative amount of feldspar dissolution over the history of the tuff, and an ambient dissolution rate is calculated by using the estimated thermal history. Second, the feldspar dissolution rate is calculated by using measured Sr isotope compositions for the pore water and rock. Pore waters display systematic changes in Sr isotopic composition with depth that are caused by feldspar dissolution. The range in dissolution rates determined from secondary mineral abundances varies from 10⁻¹⁶ to 10⁻¹⁷ mol s⁻¹ (kg tuff)⁻¹, with the largest uncertainty being the effect of the early thermal history of the tuff. Dissolution rates based on pore water Sr isotopic data were calculated by treating percolation flux parametrically, and vary from 10⁻¹⁵ to 10⁻¹⁶ mol s⁻¹ (kg tuff)⁻¹ for percolation fluxes of 15 mm a⁻¹ and 1 mm a⁻¹, respectively. Reconciling the rates from the two methods requires that percolation fluxes at the sampled locations be a few mm a⁻¹ or less. The calculated feldspar dissolution rates are low relative to other measured field-based feldspar dissolution rates, possibly due to the age (12.8 Ma) of the unsaturated system at Yucca Mountain; because oxidizing and organic-poor conditions limit biological activity; and/or because elevated silica concentrations in the pore waters (∼50 mg L⁻¹) may inhibit feldspar dissolution. © 2009 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-04-01
... calculation must be made using the rate of pay that the employee would have received but for the period of uniformed service. (b)(1) Where the rate of pay the employee would have received is not reasonably certain, such as where compensation is based on commissions earned, the average rate of compensation during the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... calculation must be made using the rate of pay that the employee would have received but for the period of uniformed service. (b)(1) Where the rate of pay the employee would have received is not reasonably certain, such as where compensation is based on commissions earned, the average rate of compensation during the...
Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin
2016-02-01
Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as effluent flow rate in ml kg⁻¹ h⁻¹. Unfortunately, most machines require other information when they are initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator which would personalise patients' treatment and deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, while prolonging filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml kg⁻¹ h⁻¹ whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose reduced from 41.0 ml kg⁻¹ h⁻¹ to 26.8 ml kg⁻¹ h⁻¹, with reduced variability that was significantly closer to the aim of 25 ml kg⁻¹ h⁻¹ (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
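The prescription logic described above can be sketched in a few lines; the blood flow, haematocrit, and the split between convective (replacement) and diffusive (dialysate) clearance below are illustrative assumptions, not values from the authors' Excel tool.

```python
def crrt_prescription(weight_kg, dose_ml_kg_h=25.0, blood_flow_ml_min=150.0,
                      hematocrit=0.30, max_filtration_fraction=0.15):
    """Sketch of a CVVHDF prescription: target effluent dose with the
    filtration fraction capped at 15% of plasma flow."""
    effluent_ml_h = dose_ml_kg_h * weight_kg
    plasma_flow_ml_h = blood_flow_ml_min * 60.0 * (1.0 - hematocrit)
    # Deliver as much of the dose convectively as the filtration-fraction
    # cap allows; the remainder is delivered diffusively as dialysate.
    max_filtration_ml_h = max_filtration_fraction * plasma_flow_ml_h
    replacement_ml_h = min(effluent_ml_h, max_filtration_ml_h)
    dialysate_ml_h = effluent_ml_h - replacement_ml_h
    return {"effluent": effluent_ml_h, "replacement": replacement_ml_h,
            "dialysate": dialysate_ml_h}
```

For an 80 kg patient at 25 ml kg⁻¹ h⁻¹ this prescribes 2000 ml/h of total effluent, split between replacement and dialysate flows by the filtration-fraction cap.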
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, D.A.
1988-02-01
Thermal maturity can be calculated with time-temperature indices (TTI) based on the Arrhenius equation using kinetics applicable to a range of Type II and Type III kerogens. These TTIs are compared with TTI calculations based on the Lopatin method and are related theoretically (and empirically via vitrinite reflectance) to the petroleum-generation window. The TTIs for both methods are expressed mathematically as integrals of temperature combined with variable linear heating rates for selected temperature intervals. Heating rates control the thermal-maturation trends of buried sediments. Relative to Arrhenius TTIs, Lopatin TTIs tend to underestimate thermal maturity at high heating rates and overestimate it at low heating rates. Complex burial histories applicable to a range of tectonic environments illustrate the different exploration decisions that might be made on the basis of independent results of these two thermal-maturation models. 15 figures, 8 tables.
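As a concrete (and deliberately simplified) instance, the Lopatin index doubles the maturation rate for every 10 °C interval above the 100-110 °C reference window; the burial-history segments below are made-up values, not data from the paper.

```python
def lopatin_tti(history):
    """Lopatin time-temperature index for a burial history given as
    (duration_Myr, temperature_interval_lower_bound_C) segments.
    Each 10 C interval above the 100-110 C window doubles the
    maturation rate (rate factor r = 2)."""
    tti = 0.0
    for duration_myr, t_lower in history:
        n = (t_lower - 100) / 10.0   # interval index: 100-110 C -> n = 0
        tti += duration_myr * 2.0 ** n
    return tti

# 10 Myr in the 100-110 C window plus 10 Myr in the 130-140 C window:
tti = lopatin_tti([(10.0, 100), (10.0, 130)])
```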
White, A.F.
2002-01-01
Chemical weathering gradients are defined by the changes in the measured elemental concentrations in solids and pore waters with depth in soils and regoliths. An increase in the mineral weathering rate increases the change in these concentrations with depth, while increases in the weathering velocity decrease the change. The solid-state weathering velocity is the rate at which the weathering front propagates through the regolith, and the solute weathering velocity is equivalent to the rate of pore water infiltration. These relationships provide a unifying approach to calculating both solid and solute weathering rates from the respective ratios of the weathering velocities and gradients. Contemporary weathering rates based on solute residence times can be directly compared to long-term past weathering based on changes in regolith composition. Both rates incorporate identical parameters describing mineral abundance, stoichiometry, and surface area. Weathering gradients were used to calculate biotite weathering rates in saprolitic regoliths in the Piedmont of Northern Georgia, USA and in the Luquillo Mountains of Puerto Rico. Solid-state weathering gradients for Mg and K at Panola produced reaction rates of 3 to 6 × 10⁻¹⁷ mol m⁻² s⁻¹ for biotite. Faster weathering rates of 1.8 to 3.6 × 10⁻¹⁶ mol m⁻² s⁻¹ are calculated based on Mg and K pore water gradients in the Rio Icacos regolith. The relative rates are in agreement with a warmer and wetter tropical climate in Puerto Rico. Both natural rates are three to six orders of magnitude slower than reported experimental rates of biotite weathering. © 2002 Elsevier Science B.V. All rights reserved.
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
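A minimal sketch of pressure-based normalization: assuming the burn rate follows Saint-Robert's law r = a·Pⁿ, a rate measured at one chamber pressure can be referred to a common reference pressure. The default exponent here is a placeholder assumption, not this propellant's measured value.

```python
def normalize_burn_rate(rate, pressure, ref_pressure, n=0.35):
    """Normalize a measured burn rate to a common reference pressure
    using Saint-Robert's law r = a * P**n, so r_ref = r * (P_ref / P)**n.
    The exponent n = 0.35 is an illustrative placeholder."""
    return rate * (ref_pressure / pressure) ** n
```

Normalizing all motors in a mix to the same pressure makes within-mix burn-rate dispersion directly comparable, which is the point of the pressure-based method described above.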
Steinbach, Sarah M L; Sturgess, Christopher P; Dunning, Mark D; Neiger, Reto
2015-06-01
Assessment of renal function by means of plasma clearance of a suitable marker has become standard procedure for estimation of glomerular filtration rate (GFR). Sinistrin, a polyfructan solely cleared by the kidney, is often used for this purpose. Pharmacokinetic modeling using adequate software is necessary to calculate the disappearance rate and half-life of sinistrin. The purpose of this study was to describe the use of a Microsoft Excel-based add-in program to calculate plasma sinistrin clearance, as well as additional pharmacokinetic parameters such as transfer rates (k), half-life (t1/2) and volume of distribution (Vss) for sinistrin in dogs with varying degrees of renal function. Copyright © 2015 Elsevier Ltd. All rights reserved.
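The kind of calculation such an add-in performs can be sketched with a one-compartment (mono-exponential) model fitted by log-linear regression; this is an illustrative simplification of the multi-compartment modeling used in the study, and the dose and sampling times in the test are hypothetical.

```python
import math

def one_compartment_fit(times, concs, dose):
    """Log-linear least-squares fit of C(t) = C0 * exp(-k*t); returns the
    disappearance rate k, half-life t1/2 = ln(2)/k, and plasma clearance
    Cl = dose / AUC, with AUC = C0 / k for a mono-exponential decay."""
    n = len(times)
    logc = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(logc) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logc))
             / sum((t - tbar) ** 2 for t in times))
    k = -slope
    c0 = math.exp(ybar - slope * tbar)
    return {"k": k, "t_half": math.log(2) / k, "clearance": dose / (c0 / k)}
```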
Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1992-01-01
This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straightforward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base is calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
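The inverse position solution described above reduces to one vector norm per leg; a minimal sketch follows, with illustrative attachment geometry (not the VES dimensions) and only a z-axis rotation in place of a full roll-pitch-yaw orientation.

```python
import math

def rotation_z(yaw):
    """3x3 rotation about z; a full implementation would compose
    roll, pitch, and yaw rotations."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def leg_lengths(base_pts, platform_pts, position, R):
    """Inverse position solution for a Stewart platform: each prismatic
    leg length is ||p + R*a_i - b_i|| for platform attachment a_i and
    base attachment b_i."""
    lengths = []
    for a, b in zip(platform_pts, base_pts):
        world = [position[j] + sum(R[j][k] * a[k] for k in range(3))
                 for j in range(3)]
        lengths.append(math.dist(world, b))
    return lengths
```

With the platform directly above the base and no rotation, every leg length equals the platform height, which is a quick correctness check.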
Pocket calculator for local fire-danger ratings
Richard J. Barney; William C. Fischer
1967-01-01
In 1964, Stockstad and Barney published tables that provided conversion factors for calculating local fire danger in the Intermountain area according to fuel types, locations, steepness of terrain, aspects, and times of day. These tables were based on the National Fire-Danger Rating System published earlier that year. This system was adopted for operational use in...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Daily, Michael D.; Baker, Nathan A.
2015-12-01
We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to a mouse acetylcholinesterase (mAChE) monomer. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) boundary condition, is considered on the reactive boundaries. This new boundary condition treatment allows for the analysis of enzymes with "imperfect" reaction rates. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.
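For a uniformly reactive sphere, the steady-state Smoluchowski problem has a closed-form rate constant that makes the effect of a Robin BC explicit and serves as a sanity check for a numerical solver. The formula below is the classical Collins-Kimball result, not code from the paper.

```python
import math

def binding_rate(D, a, kappa=None):
    """Steady-state rate constant for diffusion to a uniformly reactive
    sphere of radius a. kappa=None gives the perfectly absorbing
    (Dirichlet) limit k = 4*pi*D*a; a finite intrinsic reactivity kappa
    gives the Robin ("radiation") result k = 4*pi*D*a / (1 + D/(kappa*a)),
    which always lies below the diffusion-limited value."""
    k_diff = 4.0 * math.pi * D * a
    if kappa is None:
        return k_diff
    return k_diff / (1.0 + D / (kappa * a))
```

As kappa grows, the Robin rate approaches the Dirichlet limit, mirroring the "imperfect" versus perfectly absorbing reactive boundaries discussed above.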
Preuss, Rebekka; Chenot, Jean-François; Angelow, Aniela
2016-01-01
Objectives: Atrial fibrillation (AF) is a common cardiac arrhythmia with increased risk of thromboembolic stroke. Oral anticoagulation (OAC) reduces stroke risk by up to 68%. The aim of our study was to evaluate quality of care in patients with AF in a primary health care setting with a focus on physician guideline adherence for OAC prescription and heart rate- and rhythm management. In a second step we aimed to compare OAC rates based on primary care data with rates based on claims data. Methods: We included all GP practices in the region Vorpommern-Greifswald, Germany, which were willing to participate (N=29/182, response rate 16%). Claims data was derived from the regional association of statutory health insurance physicians. Patients with a documented AF diagnosis (ICD-10-GM-Code ICD I48.-) from 07/2011-06/2012 were identified using electronic medical records (EMR) and claims data. Stroke and bleeding risk were calculated using the CHA₂DS₂-VASc and HAS-BLED scores. We calculated crude treatment rates for OAC, rate and rhythm control medications and adjusted OAC treatment rates based on practice and claims data. Adjusted rates were calculated including the CHA₂DS₂-VASc and HAS-BLED scores and individual factors affecting guideline based treatment. Results: We identified 927 patients based on EMR and 1,247 patients based on claims data. The crude total OAC treatment rate was 69% based on EMR and 61% based on claims data. The adjusted OAC treatment rates were 90% for patients based on EMR and 63% based on claims data. 82% of the AF patients received a treatment for rate control and 12% a treatment for rhythm control. The most common reasons for non-prescription of OAC were an increased risk of falling, dementia and increased bleeding risk. Conclusion: Our results suggest that a high rate of AF patients receive a drug therapy according to guidelines. There is a large difference between crude and adjusted OAC treatment rates. 
This is due to individual contraindications and comorbidities which cannot be documented using ICD coding. Therefore, quality indicators based on crude EMR data or claims data would lead to a systematic underestimation of the quality of care. A possible overtreatment of low-risk patients cannot be ruled out.
Calawerts, William M; Lin, Liyu; Sprott, JC; Jiang, Jack J
2016-01-01
Objective/Hypothesis The purpose of this paper is to introduce rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that rate of divergence would provide an objective measure that can quantify all four voice types. Study Design 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate of divergence analysis methods. Methods We developed a new parameter, rate of divergence, which uses a modified version of Wolf’s algorithm for calculating Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points for the next three points in the time series, followed in three time delayed embedding dimensions. This measure was compared to currently existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. Results There was a direct relationship between voice type and rate of divergence. This calculation is especially effective at differentiating between type 3 and type 4 voices (p<0.001), and is equally effective at differentiating type 1, type 2, and type 3 signals as currently existing methods. Conclusion The rate of divergence calculation introduced is an objective measure that can be used to distinguish between all four voice types based on amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. PMID:26920858
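A rough sketch of the kind of divergence statistic described (nearest-neighbor separation growth over three steps in a three-dimensional time-delay embedding) might look like the following; the parameter choices and the details of Wolf's algorithm here are assumptions, so this is not the authors' exact calculation.

```python
import math

def rate_of_divergence(signal, dim=3, delay=1, steps=3):
    """Sketch of a Wolf-style divergence statistic (not a true Lyapunov
    exponent): embed the signal, pair each embedded point with its nearest
    non-adjacent neighbor, and average the log growth of their separation
    over the next `steps` points."""
    n = len(signal) - (dim - 1) * delay
    emb = [tuple(signal[i + j * delay] for j in range(dim)) for i in range(n)]
    growths = []
    for i in range(n - steps):
        # nearest neighbor, excluding temporally adjacent points
        j = min((m for m in range(n - steps) if abs(i - m) > delay),
                key=lambda m: math.dist(emb[i], emb[m]))
        d0 = math.dist(emb[i], emb[j])
        d1 = math.dist(emb[i + steps], emb[j + steps])
        if d0 > 0.0 and d1 > 0.0:
            growths.append(math.log(d1 / d0) / steps)
    return sum(growths) / len(growths)
```

On a nearly periodic (type 1) signal the statistic stays near zero; increasing disorder in the signal drives it upward, which is the basis of the voice-typing claim above.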
7 CFR 1421.10 - Loan repayment rates.
Code of Federal Regulations, 2010 CFR
2010-01-01
... determined by the Secretary) that is calculated based on average market prices for the loan commodity during... and announce repayment rates under paragraphs (a)(2) and (a)(3) of this section based upon market prices at appropriate U.S. markets as determined by CCC and these repayment rates may be adjusted to...
7 CFR 1421.10 - Loan repayment rates.
Code of Federal Regulations, 2014 CFR
2014-01-01
... determined by the Secretary) that is calculated based on average market prices for the loan commodity during... and announce repayment rates under paragraphs (a)(2) and (a)(3) of this section based upon market prices at appropriate U.S. markets as determined by CCC and these repayment rates may be adjusted to...
7 CFR 1421.10 - Loan repayment rates.
Code of Federal Regulations, 2011 CFR
2011-01-01
... determined by the Secretary) that is calculated based on average market prices for the loan commodity during... and announce repayment rates under paragraphs (a)(2) and (a)(3) of this section based upon market prices at appropriate U.S. markets as determined by CCC and these repayment rates may be adjusted to...
Methodological choices affect cancer incidence rates: a cohort study.
Brooke, Hannah L; Talbäck, Mats; Feychting, Maria; Ljung, Rickard
2017-01-19
Incidence rates are fundamental to epidemiology, but their magnitude and interpretation depend on methodological choices. We aimed to examine the extent to which the definition of the study population affects cancer incidence rates. All primary cancer diagnoses in Sweden between 1958 and 2010 were identified from the national Cancer Register. Age-standardized and age-specific incidence rates of 29 cancer subtypes between 2000 and 2010 were calculated using four definitions of the study population: persons resident in Sweden 1) based on general population statistics; 2) with no previous subtype-specific cancer diagnosis; 3) with no previous cancer diagnosis except non-melanoma skin cancer; and 4) with no previous cancer diagnosis of any type. We calculated absolute and relative differences between methods. Age-standardized incidence rates calculated using general population statistics ranged from 6% lower (prostate cancer, incidence rate difference: -13.5/100,000 person-years) to 8% higher (breast cancer in women, incidence rate difference: 10.5/100,000 person-years) than incidence rates based on individuals with no previous subtype-specific cancer diagnosis. Age-standardized incidence rates in persons with no previous cancer of any type were up to 10% lower (bladder cancer in women) than rates in those with no previous subtype-specific cancer diagnosis; however, absolute differences were <5/100,000 person-years for all cancer subtypes. For some cancer subtypes incidence rates vary depending on the definition of the study population. For these subtypes, standardized incidence ratios calculated using general population statistics could be misleading. Moreover, etiological arguments should be used to inform methodological choices during study design.
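Direct age standardization, the computation underlying these comparisons, is a weighted sum of age-specific rates; the age groups and weights in the test are hypothetical, not the Swedish standard population.

```python
def age_standardized_rate(cases, person_years, std_weights):
    """Direct age standardization: weighted sum of age-specific rates,
    expressed per 100,000 person-years; `std_weights` must sum to 1.
    Changing the denominator populations (the study-population definitions
    compared above) changes `person_years` and hence the rate."""
    rate = sum(w * cases[g] / person_years[g] for g, w in std_weights.items())
    return rate * 100_000
```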
NASA Astrophysics Data System (ADS)
Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy
2017-10-01
The article considers current questions of technological modeling and calculation of a new facility for natural water treatment, the clarifier reactor, developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN), for its optimal operating mode. A calculation technique based on well-known dependences of hydraulics is presented, and a calculation example of a structure using experimental data is considered. The maximum possible rate of the ascending flow of purified water was determined based on a 24-hour clarification cycle. The fractional composition of the contact mass was determined with minimal expansion of the contact-mass layer, which ensured the elimination of stagnant zones. The clarification cycle duration was refined from the parameters of technological modeling by recalculating the maximum possible upward flow rate of clarified water, and the thickness of the contact-mass layer was determined. Clarifier reactors can be calculated in the same way for any other clarification conditions.
Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices
NASA Astrophysics Data System (ADS)
Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The
2018-01-01
ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.
NASA Astrophysics Data System (ADS)
Alley, K. E.; Scambos, T.; Anderson, R. S.; Rajaram, H.; Pope, A.; Haran, T.
2017-12-01
Strain rates are fundamental measures of ice flow used in a wide variety of glaciological applications including investigations of bed properties, calculations of basal mass balance on ice shelves, application to Glen's flow law, and many other studies. However, despite their extensive application, strain rates are calculated using widely varying methods and length scales, and the calculation details are often not specified. In this study, we compare the results of nominal and logarithmic strain-rate calculations based on a satellite-derived velocity field of the Antarctic ice sheet generated from Landsat 8 satellite data. Our comparison highlights the differences between the two commonly used approaches in the glaciological literature. We evaluate the errors introduced by each code and their impacts on the results. We also demonstrate the importance of choosing and specifying a length scale over which strain-rate calculations are made, which can have large local impacts on other derived quantities such as basal mass balance on ice shelves. We present strain-rate data products calculated using an approximate viscous length-scale with satellite observations of ice velocity for the Antarctic continent. Finally, we explore the applications of comprehensive strain-rate maps to future ice shelf studies, including investigations of ice fracture, calving patterns, and stability analyses.
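The length-scale sensitivity the authors emphasize can be made concrete with a finite-difference sketch in which the differencing stencil sets the length scale; this is a generic nominal strain-rate calculation on a square grid, not the authors' code.

```python
def strain_rates(u, v, dx, half_window=1):
    """Nominal surface strain-rate components from a gridded velocity
    field (u: x-velocity, v: y-velocity) by centered differences on a
    square grid of spacing dx. `half_window` sets the length scale of
    the calculation, which the study shows can strongly affect derived
    quantities such as basal mass balance."""
    ny, nx = len(u), len(u[0])
    h = half_window
    exx = [[(u[j][i + h] - u[j][i - h]) / (2 * h * dx)
            for i in range(h, nx - h)] for j in range(h, ny - h)]
    eyy = [[(v[j + h][i] - v[j - h][i]) / (2 * h * dx)
            for i in range(h, nx - h)] for j in range(h, ny - h)]
    exy = [[0.5 * ((u[j + h][i] - u[j - h][i]) + (v[j][i + h] - v[j][i - h]))
            / (2 * h * dx) for i in range(h, nx - h)] for j in range(h, ny - h)]
    return exx, eyy, exy
```

For a linear velocity field the result is exact at any `half_window`; on real, noisy velocity data the choice of window trades noise suppression against spatial resolution, which is why reporting it matters.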
The effect of rate denominator source on US fatal occupational injury rate estimates.
Richardson, David; Loomis, Dana; Bailer, A John; Bena, James
2004-09-01
The Current Population Survey (CPS) is often used as a source of denominator information for analyses of US fatal occupational injury rates. However, given the relatively small sample size of the CPS, analyses that examine the cross-classification of occupation or industry with demographic or geographic characteristics will often produce highly imprecise rate estimates. The Decennial Census of Population provides an alternative source for rate denominator information. We investigate the comparability of fatal injury rates derived using these two sources of rate denominator information. Information on fatal occupational injuries that occurred between January 1, 1983 and December 31, 1994 was obtained from the National Traumatic Occupational Fatality surveillance system. Annual estimates of employment by occupation, industry, age, and sex were derived from the CPS, and by linear interpolation and extrapolation from the 1980 and 1990 Census of Population. Fatal injury rates derived using these denominator data were compared. Fatal injury rates calculated using Census-based denominator data were within 10% of rates calculated using CPS data for all major occupation groups except farming/forestry/fishing, for which the fatal injury rate calculated using Census-based denominator data was 24.69/100,000 worker-years and the rate calculated using CPS data was 19.97/100,000 worker-years. The choice of denominator data source had minimal influence on estimates of trends over calendar time in the fatal injury rates for most major occupation and industry groups. The Census offers a reasonable source for deriving fatal injury rate denominator data in situations where the CPS does not provide sufficiently precise data, although the Census may underestimate the population-at-risk in some industries as a consequence of seasonal variation in employment.
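The rate arithmetic involved is simple; a sketch, using the farming/forestry/fishing figures quoted above to illustrate the denominator effect:

```python
def fatal_injury_rate(deaths, worker_years, per=100_000):
    """Fatal occupational injury rate per 100,000 worker-years."""
    return per * deaths / worker_years

def relative_difference(rate_a, rate_b):
    """Relative difference of rate_a from the reference rate_b, used to
    compare Census-based with CPS-based rate estimates."""
    return (rate_a - rate_b) / rate_b
```

With the quoted rates, the Census-based estimate (24.69) exceeds the CPS-based one (19.97) by about 24%, well outside the 10% band seen for the other major occupation groups.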
Deformed shell model study of event rates for WIMP-73Ge scattering
NASA Astrophysics Data System (ADS)
Sahu, R.; Kota, V. K. B.
2017-12-01
The event detection rates for the Weakly Interacting Massive Particles (WIMP) (a dark matter candidate) are calculated with 73Ge as the detector. The calculations are performed within the deformed shell model (DSM) based on Hartree-Fock states. First, the energy levels and magnetic moment for the ground state and two low-lying positive parity states for this nucleus are calculated and compared with experiment. The agreement is quite satisfactory. Then the nuclear wave functions are used to investigate the elastic and inelastic scattering of WIMP from 73Ge; inelastic scattering, especially for the 9/2+ → 5/2+ transition, is studied for the first time. The nuclear structure factors which are independent of supersymmetric model are also calculated as a function of WIMP mass. The event rates are calculated for a given set of nucleonic current parameters. The calculation shows that 73Ge is a good detector for detecting dark matter.
Tohru Mitsunaga; Anthony H. Conner; Charles G. Hill
2002-01-01
The rates (k) of hydroxymethylation of phenol, resorcinol, phloroglucinol, and several methylphenols in dilute 10% dimethylformamide aqueous alkaline solution were calculated based on the consumption of phenols and formaldehyde. The k values of phloroglucinol and resorcinol relative to that of phenol were about 62,000 and 1,200 times, respectively. The phenols that have...
Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC
NASA Astrophysics Data System (ADS)
Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina
2016-11-01
New state-specific vibrational-translational energy exchange and dissociation models, based on ab initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shockwave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), whereas others may or may not be calculable from the measured ones. When certain measured rates can also be calculated from other measured rates via the set of equations, the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is based entirely on matrix algebra (principally different from the graph-theoretical approach) and is easily implemented in a computer program.
(c) 1994 John Wiley & Sons, Inc.
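The balancing step the authors describe can be illustrated in its simplest special case: one linear conservation relation (here a single carbon balance) and equal measurement variances, where reconciliation reduces to an orthogonal projection of the measured rate vector onto the constraint. The paper's general treatment works through the redundancy matrix R; the rates below are hypothetical.

```python
# Minimal sketch of balancing measured conversion rates against one
# conservation relation E.r = 0, assuming equal measurement variances.
# This is the simplest special case of the paper's matrix framework;
# the rates (C-mol/h) are hypothetical.

def balance_rates(E, r):
    """Project measured rates r onto the subspace where E.r = 0."""
    residual = sum(e * ri for e, ri in zip(E, r))   # how badly E.r = 0 fails
    norm2 = sum(e * e for e in E)
    return [ri - e * residual / norm2 for e, ri in zip(E, r)]

# Carbon balance: substrate consumed = biomass formed + CO2 formed.
E = [1.0, -1.0, -1.0]            # r_substrate - r_biomass - r_CO2 = 0
r_measured = [10.0, 6.2, 3.2]    # slightly inconsistent measurements

r_balanced = balance_rates(E, r_measured)
print(r_balanced)
```

The balanced rates satisfy the conservation relation exactly, which is what makes them a consistent set for calculating nonmeasured rates.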
Cosmogenic Ne-21 Production Rates in H-Chondrites Based on Cl-36 - Ar-36 Ages
NASA Technical Reports Server (NTRS)
Leya, I.; Graf, Th.; Nishiizumi, K.; Guenther, D.; Wieler, R.
2000-01-01
We measured Ne-21 production rates in 14 H-chondrites; they are in good agreement with model calculations. The production rates are based on Ne-21 concentrations measured on bulk samples or the non-magnetic fraction, and on Cl-36 - Ar-36 ages determined from the metal phase.
Rapidly-formed ferromanganese deposit from the eastern Pacific Hess Deep
Burnett, W.C.; Piper, D.Z.
1977-01-01
A thick ferromanganese deposit encrusting fresh basaltic glass has been dredged from the Hess Deep in the eastern Pacific. Contiguous layers within the Fe-Mn crust have been analysed for uranium-series isotopes and metal contents. The rate of accumulation of the deposit, based on the decline of uranium-unsupported ²³⁰Th, is calculated to be approximately 50 mm per 10⁶ yr. Based on hydration-rind dating of the underlying glass and an 'exposure age' calculation, this rate is concluded to be too slow, and an accretion rate on the order of 1 mm per 10³ yr is more consistent with our data. © 1977 Nature Publishing Group.
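The accumulation-rate calculation from unsupported ²³⁰Th can be sketched as follows: at constant growth rate S, activity decays with depth as A(z) = A₀·exp(-λz/S), so ln A is linear in depth and S follows from the slope. The profile below is synthetic, and the ²³⁰Th half-life is taken as roughly 75,000 yr (an assumption stated here only approximately).

```python
import math

# Sketch: crust growth rate from the depth profile of unsupported 230Th,
# A(z) = A0 * exp(-lambda * z / S). Synthetic data; half-life ~75,000 yr
# is an approximate value used for illustration.

LAMBDA = math.log(2) / 75_000          # 230Th decay constant, 1/yr

def growth_rate(depths_mm, activities):
    """Least-squares slope of ln(activity) vs depth; S = -lambda/slope (mm/yr)."""
    n = len(depths_mm)
    logs = [math.log(a) for a in activities]
    mx = sum(depths_mm) / n
    my = sum(logs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(depths_mm, logs))
             / sum((x - mx) ** 2 for x in depths_mm))
    return -LAMBDA / slope

# Synthetic profile generated with S = 5e-5 mm/yr (i.e., 50 mm per 10^6 yr).
S_true = 5e-5
depths = [0.0, 2.0, 4.0, 6.0, 8.0]
acts = [math.exp(-LAMBDA * z / S_true) for z in depths]
print(growth_rate(depths, acts))       # recovers ~5e-5 mm/yr
```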
Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.
2010-01-01
Purpose: An increasing number of parents and practitioners use the Internet for health-related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet-based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods: Following a systematic Internet search we identified 3 Internet-based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results: Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions: Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet-based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically, yield calculation requires many SPICE simulations, and circuit SPICE simulation accounts for the largest share of the time spent in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables. The model is constructed by running SPICE simulations to obtain a set of sample points and training the mixture surrogate model on these points with the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation even more. The method is suitable for high-dimensional process variables and multi-performance applications.
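The cost problem the paper starts from is easy to see in a toy setting. This sketch is not the paper's surrogate method; it only illustrates, on a one-dimensional Gaussian threshold model, why plain Monte Carlo is impractical for failure rates around 10⁻⁵ and how importance sampling (sampling from a proposal shifted into the failure region and reweighting) recovers the rate cheaply.

```python
import math
import random

# Toy illustration (not the paper's surrogate approach): estimate a rare
# failure probability P(X > 4) for X ~ N(0,1) by importance sampling with
# proposal N(4,1). Plain Monte Carlo would need ~10^7 samples to see even
# a handful of failures at this rate.

def importance_sampling_failure_rate(threshold=4.0, n=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)                  # draw from N(threshold, 1)
        if y > threshold:
            # likelihood ratio N(0,1)/N(threshold,1) evaluated at y
            total += math.exp(threshold**2 / 2 - threshold * y)
    return total / n

est = importance_sampling_failure_rate()
print(est)   # close to the exact tail probability, about 3.17e-5
```

Every evaluation of the failure indicator here is free; in the SRAM setting each one is a SPICE run, which is exactly what the surrogate model is meant to replace.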
Calawerts, William M; Lin, Liyu; Sprott, J C; Jiang, Jack J
2017-01-01
The purpose of this paper is to introduce the rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that rate of divergence would provide an objective measure that can quantify all four voice types. A total of 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate of divergence analysis methods. We developed a new parameter, rate of divergence, which uses a modified version of Wolf's algorithm for calculating Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points for the next three points in the time series, followed in three time-delayed embedding dimensions. This measure was compared to currently existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. There was a direct relationship between voice type and rate of divergence. This calculation is especially effective at differentiating between type 3 and type 4 voices (P < 0.001) and is equally effective at differentiating type 1, type 2, and type 3 signals as currently existing methods. The rate of divergence calculation introduced is an objective measure that can be used to distinguish between all four voice types based on the amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
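The divergence measure described above can be sketched loosely: embed the signal in three time-delay dimensions, pair each embedded point with its nearest neighbor outside a short temporal exclusion window, and average the logarithmic growth of their separation over the next three steps. This follows the paper's description only approximately; the delay, exclusion window, and averaging choices below are our assumptions, not the authors' exact algorithm.

```python
import math
import random

# Hedged sketch of a Wolf-style "rate of divergence": track how fast nearby
# points in a 3-dimensional time-delay embedding separate over 3 steps.
# Parameter choices (delay, exclusion window) are illustrative assumptions.

def divergence_rate(x, dim=3, delay=1, follow=3, exclude=5):
    pts = [tuple(x[i + k * delay] for k in range(dim))
           for i in range(len(x) - (dim - 1) * delay)]
    n = len(pts)

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    growths = []
    for i in range(n - follow):
        # nearest neighbor outside a temporal exclusion window
        j = min((k for k in range(n - follow) if abs(k - i) > exclude),
                key=lambda k: dist(pts[i], pts[k]))
        d0 = dist(pts[i], pts[j])
        d1 = dist(pts[i + follow], pts[j + follow])
        if d0 > 0 and d1 > 0:
            growths.append(math.log(d1 / d0) / follow)
    return sum(growths) / len(growths)

# A clean periodic signal should diverge less than the same signal plus noise,
# mirroring the paper's finding that divergence grows with signal disorder.
clean = [math.sin(0.2 * i) for i in range(300)]
rng = random.Random(0)
noisy = [s + 0.3 * rng.random() for s in clean]
print(divergence_rate(clean), divergence_rate(noisy))
```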
ERIC Educational Resources Information Center
Barber, Betsy; Ball, Rhonda
This project description is designed to show how graphing calculators and calculator-based laboratories (CBLs) can be used to explore topics in physics and health sciences. The activities address such topics as respiration, heart rate, and the circulatory system. Teaching notes and calculator instructions are included as are blackline masters. (MM)
NASA Technical Reports Server (NTRS)
Ganguly, Jibamitra
1989-01-01
Results of preliminary calculations of volatile abundances in carbonaceous chondrites are discussed. The method (Ganguly 1982) was refined for the calculation of cooling rate on the basis of cation ordering in orthopyroxenes, and it was applied to the derivation of cooling rates of some stony meteorites. Evaluation of cooling rate is important to the analysis of condensation, accretion, and post-accretionary metamorphic histories of meteorites. The method of orthopyroxene speedometry is widely applicable to meteorites and would be very useful in the understanding of the evolutionary histories of carbonaceous chondrites, especially since the conventional metallographic and fission track methods yield widely different results in many cases. Abstracts are given which summarize the major conclusions of the volatile abundance and cooling rate calculations.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-10
... question, including when that rate is zero or de minimis.\\5\\ In this case, there is only one non-selected... calculations for one company. Therefore, the final results differ from the preliminary results. The final... not to calculate an all-others rate using any zero or de minimis margins or any margins based entirely...
FPGA Implementation of Heart Rate Monitoring System.
Panigrahy, D; Rakshit, M; Sahu, P K
2016-03-01
This paper describes a field programmable gate array (FPGA) implementation of a system that calculates the heart rate from the electrocardiogram (ECG) signal. After the heart rate is calculated, tachycardia, bradycardia, or a normal heart rate can easily be detected. ECG is a diagnostic tool routinely used to assess the electrical activity and muscular function of the heart. Heart rate is calculated by detecting the R peaks in the ECG signal. Providing a portable, continuous ECG-based heart rate monitoring system for patients requires dedicated hardware. FPGAs provide easy testability and allow a faster implementation and verification option for a new design. We have proposed a five-stage methodology using basic VHDL blocks such as addition, multiplication, and data conversion (real to fixed point and vice versa). Our proposed heart rate calculation (R-peak detection) method has been validated using the 48 first-channel ECG records of the MIT-BIH arrhythmia database. It shows an accuracy of 99.84%, a sensitivity of 99.94%, and a positive predictive value of 99.89%. Our proposed method outperforms other well-known methods on pathological ECG signals and was successfully implemented on an FPGA.
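The post-detection step is simple to state: once the R-peak sample indices are known, heart rate follows from the mean R-R interval, and thresholds classify the rhythm. The sketch below uses the common textbook thresholds (above 100 bpm tachycardia, below 60 bpm bradycardia) as assumptions; the paper's VHDL pipeline is not reproduced here.

```python
# Sketch of heart-rate calculation from detected R-peak positions.
# Classification thresholds (>100 tachycardia, <60 bradycardia) are the
# usual textbook values, assumed rather than taken from the paper.

def heart_rate_bpm(r_peaks, fs):
    """r_peaks: sample indices of detected R peaks; fs: sampling rate (Hz)."""
    rr = [(b - a) / fs for a, b in zip(r_peaks, r_peaks[1:])]  # R-R intervals, s
    return 60.0 / (sum(rr) / len(rr))

def classify(bpm):
    if bpm > 100:
        return "tachycardia"
    if bpm < 60:
        return "bradycardia"
    return "normal"

# MIT-BIH records are sampled at 360 Hz; peaks 270 samples apart = 0.75 s.
peaks = [100, 370, 640, 910, 1180]
bpm = heart_rate_bpm(peaks, fs=360)
print(bpm, classify(bpm))   # 80.0 normal
```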
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely developed for fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous two-step scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; the results are in good agreement with those of the other participants.
On determining dose rate constants spectroscopically
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, M.; Rogers, D. W. O.
2013-01-15
Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of ¹²⁵I and ¹⁰³Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated ¹²⁵I and ¹⁰³Pd sources. Methods: Spectra generated by 14 ¹²⁵I and 6 ¹⁰³Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council on Radiation Protection and Measurements) Report 58 for the ¹²⁵I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for ¹⁰³Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤ 0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in ¹²⁵I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The ¹⁰³Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the ¹²⁵I and ¹⁰³Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences, outside the statistical uncertainties of 0.1% or 0.2%, in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Conclusions: Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
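Two of the four DSAS rate-of-change methods are straightforward to sketch for a single transect: the endpoint rate uses only the oldest and newest shorelines, while simple linear regression fits all survey dates. The positions and dates below are hypothetical.

```python
# Sketch of two DSAS rate-of-change statistics for one transect.
# Positions are distances (m) from the baseline at each survey date
# (decimal years); the data are hypothetical.

def endpoint_rate(years, positions):
    """Net shoreline movement divided by elapsed time (m/yr)."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    """Least-squares slope of position vs time (m/yr)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(positions) / n
    return (sum((x - mx) * (y - my) for x, y in zip(years, positions))
            / sum((x - mx) ** 2 for x in years))

years = [1930.0, 1960.0, 1990.0, 2005.0]
pos = [120.0, 105.0, 88.0, 80.0]     # retreating shoreline

print(endpoint_rate(years, pos))             # about -0.533 m/yr
print(linear_regression_rate(years, pos))    # about -0.537 m/yr
```

Negative rates indicate erosion; DSAS additionally reports the standard error and confidence interval for the regression-based methods.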
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
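The moving-average computation described in the rule text can be sketched directly: the Recent Average is the simple average of the 12 most recent monthly CPI values, the Base Average is the average of the 12 months preceding those, and the annual limitation is the ratio of the two minus 1. The CPI figures below are hypothetical.

```python
# Sketch of the annual-limitation computation described in 39 CFR 3010.21:
# Recent Average / Base Average - 1, each average taken over 12 months.
# The CPI values are hypothetical.

def annual_limitation(cpi_last_24_months):
    """cpi_last_24_months: 24 monthly CPI values, oldest first."""
    base = sum(cpi_last_24_months[:12]) / 12     # Base Average
    recent = sum(cpi_last_24_months[12:]) / 12   # Recent Average
    return recent / base - 1

# 24 months of hypothetical CPI rising 0.2 points per month.
cpi = [230.0 + 0.2 * m for m in range(24)]
print(f"{annual_limitation(cpi):.4%}")
```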
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
39 CFR 3010.21 - Calculation of annual limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... notice of rate adjustment and dividing the sum by 12 (Recent Average). Then, a second simple average CPI... Recent Average and dividing the sum by 12 (Base Average). Finally, the annual limitation is calculated by dividing the Recent Average by the Base Average and subtracting 1 from the quotient. The result is...
Arbib, Zouhayr; de Godos Crespo, Ignacio; Corona, Enrique Lara; Rogalla, Frank
2017-06-01
Microalgae culture in high rate algae ponds (HRAP) is an environmentally friendly technology for wastewater treatment. However, for the implementation of these systems, a better understanding of the oxygenation potential and the influence of climate conditions is required. In this work, the rates of oxygen production, consumption, and exchange with the atmosphere were calculated under varying conditions of solar irradiance and dilution rate during six months of operation in a real-scale unit. This analysis made it possible to determine the biological response of these dynamic systems. The measured rates of oxygen consumption were considerably higher than the values calculated based on the organic loading rate. The response to light intensity, in terms of oxygen production in the bioreactor, was described with one of the models proposed for microalgae culture at dense concentrations. This model is based on the availability of light inside the culture and the specific response of the microalgae to this parameter. The specific response to solar radiation intensity showed reasonable stability in spite of fluctuations due to meteorological conditions. The methodology developed is a useful tool for optimization and prediction of the performance of these systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
....171 of this part, into a single per treatment base rate developed from 2007 claims data. The steps to..., or 2009. CMS removes the effects of enrollment and price growth from total expenditures for 2007...
39 CFR 3010.23 - Calculation of percentage change in rates.
Code of Federal Regulations, 2014 CFR
2014-07-01
... adjustment for rates of general applicability. A seasonal or temporary rate shall be identified and treated as a rate cell separate and distinct from the corresponding non-seasonal or permanent rate. (b) For... based on known mail characteristics or historical volume data, as opposed to forecasts of mailer...
A satellite technique for quantitatively mapping rainfall rates over the oceans
NASA Technical Reports Server (NTRS)
Wilheit, T. T.; Rao, M. S. V.; Chang, T. C.; Rodgers, E. B.; Theon, J. S.
1975-01-01
A theoretical model for calculating microwave radiative transfer in raining atmospheres is developed. These calculations are compared with microwave brightness temperatures at a wavelength of 1.55 cm measured on the Nimbus-5 satellite and rain rates derived from WSR-57 meteorological radar measurements. A specially designed ground based verification experiment was also performed wherein upward viewing microwave brightness temperature measurements at wavelengths of 1.55 cm and 0.81 cm were compared with directly measured rain rates.
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-04-01
Investigations on Palaeonummulites venosus using the natural laboratory approach for determining chamber building rate, test diameter increase rate, reproduction time, and longevity are based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the investigation period of 15 months was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on chamber building rate and diameter increase rate both resulted in continuous reproduction through the year with two peaks, the stronger in May/June, determined as the beginning of the summer generation (generation 1), and the weaker in November, determined as the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between an individual's birth date and the sampling date, appears to be about one year, as obtained by both estimations based on the chamber building rate and the diameter increase rate.
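The birth-date estimation step can be sketched with a saturating growth model: if mean chamber number follows a Michaelis-Menten curve N(t) = a·t/(b + t), then inverting it at an individual's observed chamber count gives its age, and subtracting the age from the sampling date gives the birth date. The parameters a and b below are hypothetical, not the fitted values from the study.

```python
# Hedged sketch of estimating an individual's birth date by inverting a
# Michaelis-Menten chamber-building curve N(t) = A*t/(B + t), where N is
# chamber number and t is age in days. A and B are hypothetical parameters.

A, B = 60.0, 120.0          # asymptotic chamber number, half-saturation age

def chambers_at_age(t):
    return A * t / (B + t)

def age_from_chambers(n):
    """Invert N = A*t/(B+t)  ->  t = B*n/(A - n)."""
    return B * n / (A - n)

# A specimen with 30 chambers collected on day-of-year 200:
age = age_from_chambers(30)          # 120 days old
birth_doy = 200 - age
print(age, birth_doy)   # 120.0 80.0
```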
Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct
Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan
2013-01-01
Objective To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources Rates of 28 quality indicators (QIs) calculated from the minimum dataset from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
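The intuition behind shrinkage is easy to show in a univariate special case. The study uses a full Bayesian multivariate normal-binomial model; the sketch below only conveys the idea: a facility's observed rate is pulled toward the overall mean, and more strongly when its denominator is small. The prior weight k and the rates are hypothetical.

```python
# Univariate sketch of shrinkage estimation (the paper's model is a Bayesian
# multivariate normal-binomial; this simplified weighted average only
# illustrates the principle). k is a hypothetical prior weight.

def shrunken_rate(events, n, overall_rate, k=50):
    """Pull the observed rate toward the overall rate; weight grows with n."""
    w = n / (n + k)
    return w * (events / n) + (1 - w) * overall_rate

overall = 0.10
# Small facility: 3 events among 10 residents (observed rate 0.30).
small = shrunken_rate(3, 10, overall)      # pulled strongly toward 0.10
# Large facility: 60 events among 200 residents (observed rate 0.30).
large = shrunken_rate(60, 200, overall)    # stays much closer to 0.30
print(small, large)
```

The same observed rate thus yields different shrunken estimates depending on how much data supports it, which is why shrunken-rate composites are more stable year to year.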
INFLUENCES OF RESPONSE RATE AND DISTRIBUTION ON THE CALCULATION OF INTEROBSERVER RELIABILITY SCORES
Rolider, Natalie U.; Iwata, Brian A.; Bullock, Christopher E.
2012-01-01
We examined the effects of several variations in response rate on the calculation of total, interval, exact-agreement, and proportional reliability indices. Trained observers recorded computer-generated data that appeared on a computer screen. In Study 1, target responses occurred at low, moderate, and high rates during separate sessions so that reliability results based on the four calculations could be compared across a range of values. Total reliability was uniformly high, interval reliability was spuriously high for high-rate responding, proportional reliability was somewhat lower for high-rate responding, and exact-agreement reliability was the lowest of the measures, especially for high-rate responding. In Study 2, we examined the separate effects of response rate per se, bursting, and end-of-interval responding. Response rate and bursting had little effect on reliability scores; however, the distribution of some responses at the end of intervals decreased interval reliability somewhat, proportional reliability noticeably, and exact-agreement reliability markedly. PMID:23322930
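The four indices compared in the study can be sketched from two observers' per-interval response counts. The formulas below follow common usage in the behavioral literature; the study's exact implementations may differ in detail.

```python
# Sketch of four interobserver agreement indices computed from two observers'
# per-interval response counts. Formulas follow common usage; the study's
# exact definitions may differ slightly.

def total_reliability(a, b):
    """Smaller session total divided by larger session total."""
    s1, s2 = sum(a), sum(b)
    return min(s1, s2) / max(s1, s2)

def interval_reliability(a, b):
    """Agreement on occurrence vs nonoccurrence in each interval."""
    agree = sum((x > 0) == (y > 0) for x, y in zip(a, b))
    return agree / len(a)

def exact_agreement(a, b):
    """Proportion of intervals with identical counts."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def proportional_reliability(a, b):
    """Mean per-interval smaller/larger count ratio."""
    def per_interval(x, y):
        return 1.0 if x == y == 0 else min(x, y) / max(x, y)
    return sum(per_interval(x, y) for x, y in zip(a, b)) / len(a)

obs1 = [0, 2, 3, 1, 0, 4]
obs2 = [0, 2, 2, 1, 1, 4]
for f in (total_reliability, interval_reliability,
          exact_agreement, proportional_reliability):
    print(f.__name__, round(f(obs1, obs2), 3))
```

Note that total reliability is 1.0 here even though the observers disagree interval by interval, which mirrors the study's finding that the total index can be spuriously high.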
Ji, Young-Yong; Kim, Chang-Jong; Lim, Kyo-Sun; Lee, Wanno; Chang, Hyon-Sock; Chung, Kun Ho
2017-10-01
To expand the application of dose rate spectroscopy to the environment, a method using an environmental radiation monitor (ERM) based on a 3″ × 3″ NaI(Tl) detector was used to perform real-time monitoring of the dose rate and radioactivity of gamma nuclides detected in the ground around the ERM. Full-energy absorption peaks in the dose rate energy spectrum were first identified to calculate the individual dose rates of Bi, Ac, Tl, and K distributed in the ground, using interference correction to account for the finite energy resolution of the NaI(Tl) detector in the ERM. The radioactivity of the four natural radionuclides was then calculated from the in situ calibration factor (the dose rate per unit curie) of the ERM for the geometry of the ground in infinite half-space, which was theoretically estimated by Monte Carlo simulation. Through an intercomparison using a portable HPGe detector and samples taken from the ground around the ERM, this method of calculating the dose rate and radioactivity of the four nuclides was experimentally verified and finally applied to remote real-time monitoring in the area in which the ERM had been installed.
Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation
NASA Technical Reports Server (NTRS)
Ferriday, James G.; Avery, Susan K.
1994-01-01
A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, to reduce extraneous atmospheric and surface effects, and to correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land, the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.
Code of Federal Regulations, 2010 CFR
2010-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Code of Federal Regulations, 2014 CFR
2014-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Code of Federal Regulations, 2012 CFR
2012-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Code of Federal Regulations, 2011 CFR
2011-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation
Code of Federal Regulations, 2013 CFR
2013-01-01
... stepped interest rates, and to certain time accounts with a stated maturity greater than one year. A... calculated by the formula shown below. Institutions shall calculate the annual percentage yield based on the... determining the total interest figure to be used in the formula, institutions shall assume that all principal...
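The general formula referred to in this appendix is APY = 100 × [(1 + Interest/Principal)^(365/Days in term) − 1]. A one-line sketch, with illustrative figures (the dollar amounts are not taken from the regulation's own examples):

```python
# Sketch of the Regulation DD Appendix A general formula:
# APY = 100 * [(1 + Interest/Principal)^(365 / days_in_term) - 1].
# The example figures are illustrative.

def annual_percentage_yield(principal, interest, days_in_term):
    return 100 * ((1 + interest / principal) ** (365 / days_in_term) - 1)

# $1,000 earning $30.37 over a 182-day term:
apy = annual_percentage_yield(1_000, 30.37, 182)
print(round(apy, 2))   # 6.18
```

The exponent annualizes the earnings: a 182-day term is compounded roughly twice to cover 365 days.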
12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation
Code of Federal Regulations, 2011 CFR
2011-01-01
... stepped interest rates, and to certain time accounts with a stated maturity greater than one year. A... calculated by the formula shown below. Institutions shall calculate the annual percentage yield based on the... determining the total interest figure to be used in the formula, institutions shall assume that all principal...
12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation
Code of Federal Regulations, 2014 CFR
2014-01-01
... stepped interest rates, and to certain time accounts with a stated maturity greater than one year. A... calculated by the formula shown below. Institutions shall calculate the annual percentage yield based on the... determining the total interest figure to be used in the formula, institutions shall assume that all principal...
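The Appendix A general formula is APY = 100 [(1 + Interest/Principal)^(365/Days in term) − 1]. A minimal sketch of that calculation (the function name and the example dollar figures are illustrative, not from the regulation):

```python
def annual_percentage_yield(principal, interest, days_in_term):
    # General formula from Appendix A:
    # APY = 100 * [(1 + Interest/Principal)^(365/days in term) - 1]
    return 100.0 * ((1.0 + interest / principal) ** (365.0 / days_in_term) - 1.0)

# $1,000 earning $30.37 over a 182-day term annualizes to roughly 6.18%.
apy = annual_percentage_yield(1000.0, 30.37, 182)
```

For a 365-day term the APY reduces to the simple interest percentage, which makes a convenient sanity check.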
Study of Gamow-Teller strength and associated weak-rates on odd-A nuclei in stellar matter
NASA Astrophysics Data System (ADS)
Majid, Muhammad; Nabi, Jameel-Un; Riaz, Muhammad
In a recent study by Cole et al. [A. L. Cole et al., Phys. Rev. C 86 (2012) 015809], it was concluded that quasi-particle random phase approximation (QRPA) calculations show larger deviations and overestimate the total experimental Gamow-Teller (GT) strength. It was also concluded that QRPA-calculated electron capture rates exhibit larger deviations than those derived from the measured GT strength distributions. The main purpose of this study is to probe the findings of the Cole et al. paper, and it gives useful information on the performance of QRPA-based nuclear models. Simulation results indicate that electron capture on medium-heavy isotopes plays a significant role in decreasing the electron-to-baryon ratio of the stellar interior during the late stages of core evolution. We report the calculation of allowed charge-changing transition strengths for odd-A fp-shell nuclei (45Sc and 55Mn) employing the deformed pn-QRPA approach. The computed GT transition strength is compared with previous theoretical calculations and measured data. For stellar applications, the corresponding electron capture rates are computed and compared with rates based on previously calculated and measured GT values. Our calculated results are in good agreement with the measured data. At higher stellar temperatures, our calculated electron capture rates are larger than those calculated by the independent particle model (IPM) and the shell model. It was further concluded that in low-temperature and high-density regions, the positron emission weak rates from 45Sc and 55Mn may be neglected in simulation codes.
The Occurrence Rate of Hot Jupiters
NASA Astrophysics Data System (ADS)
Rampalli, Rayna; Catanzarite, Joseph; Batalha, Natalie M.
2017-01-01
As the first kind of exoplanet to be discovered, hot Jupiters have always been objects of interest. Despite being prevalent in radial velocity and ground-based surveys, they were found to be much rarer based on Kepler observations. These data show a pile-up at radii of 9-22 Rearth and orbital periods of 1-10 days. Computing accurate occurrence rates can lend insight into planet formation and migration theories. To get a more accurate look, the idea of reliability was introduced. Each hot Jupiter candidate was assigned a reliability based on its location in the galactic plane and its likelihood of being a false positive. Numbers were updated if ground-based follow-up indicated a candidate was indeed a false positive. These reliabilities were introduced into an occurrence rate calculation and yielded about a 12% decrease in occurrence rate for each period bin examined and a 25% decrease across all the bins. To get a better idea of the cause behind the pile-up, occurrence rates based on parent stellar metallicity were calculated. As expected from previous work, higher-metallicity stars yield higher occurrence rates. Future work includes examining period distributions in both the high-metallicity and low-metallicity samples for a better understanding and confirmation of the pile-up effect.
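The reliability weighting described above can be sketched as a generic inverse-detection-efficiency estimator (an illustration only; the function and all numbers are invented, not the authors' pipeline):

```python
def occurrence_rate(reliabilities, completenesses, n_stars):
    # Each candidate contributes reliability / completeness expected
    # planets per searchable star; summing over candidates and dividing
    # by the number of searched stars gives the occurrence rate.
    return sum(r / c for r, c in zip(reliabilities, completenesses)) / n_stars

# Down-weighting likely false positives lowers the estimate:
naive = occurrence_rate([1.0, 1.0, 1.0], [0.8, 0.9, 0.7], 1000)
weighted = occurrence_rate([0.9, 0.6, 0.95], [0.8, 0.9, 0.7], 1000)
```

With every candidate assumed real (reliability 1) the rate is highest; assigning reliabilities below 1 reproduces the kind of decrease the abstract reports.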
Limited Impact of Subglacial Supercooling Freeze-on for Greenland Ice Sheet Stratigraphy
NASA Astrophysics Data System (ADS)
Dow, Christine F.; Karlsson, Nanna B.; Werder, Mauro A.
2018-02-01
Large units of disrupted radiostratigraphy (UDR) are visible in many radio-echo sounding data sets from the Greenland Ice Sheet. This study investigates whether supercooling freeze-on rates at the bed can cause the observed UDR. We use a subglacial hydrology model to calculate both freezing and melting rates at the base of the ice sheet in a distributed sheet and within basal channels. We find that while supercooling freeze-on is a phenomenon that occurs in many areas of the ice sheet, there is no discernible correlation with the occurrence of UDR. The supercooling freeze-on rates are so low that it would require tens of thousands of years with minimal downstream ice motion to form the hundreds of meters of disrupted radiostratigraphy. Overall, the melt rates at the base of the ice sheet greatly overwhelm the freeze-on rates, which has implications for mass balance calculations of Greenland ice.
75 FR 3197 - Summer Food Service Program; 2010 Reimbursement Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-20
..., reimbursement has been based solely on a ``meals times rates'' calculation, without comparison to actual or... public of the annual adjustments to the reimbursement rates for meals served in the Summer Food Service... to the reimbursement rates for meals served in the Summer Food Service Program (SFSP). As required...
Wall, G.R.; Ingleston, H.H.; Litten, S.
2005-01-01
Total mercury (THg) load in rivers is often calculated from a site-specific "rating-curve" based on the relation between THg concentration and river discharge along with a continuous record of river discharge. However, there is no physical explanation as to why river discharge should consistently predict THg or any other suspended analyte. THg loads calculated by the rating-curve method were compared with those calculated by a "continuous surrogate concentration" (CSC) method in which a relation between THg concentration and suspended-sediment concentration (SSC) is constructed; THg loads then can be calculated from the continuous record of SSC and river discharge. The rating-curve and CSC methods, respectively, indicated annual THg loads of 46.4 and 75.1 kg for the Mohawk River, and 52.9 and 33.1 kg for the upper Hudson River. Differences between the results of the two methods are attributed to the inability of the rating-curve method to adequately characterize atypical high flows such as an ice-dam release, or to account for hysteresis, which typically degrades the strength of the relation between stream discharge and concentration of material in suspension. © Springer 2005.
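The rating-curve load calculation can be sketched as a concentration-discharge power law integrated over the discharge record (a minimal sketch; the power-law coefficients and discharge series are hypothetical, for illustration only):

```python
def rating_curve_conc(q_m3s, a=1.0, b=0.8):
    # Hypothetical power-law rating curve: THg (ng/L) = a * Q**b
    return a * q_m3s ** b

def annual_load_kg(daily_discharge_m3s, conc_fn):
    # Load = sum over days of C(Q) * Q * dt, with 1 ng/L = 1e-9 kg/m^3
    dt = 86400.0  # seconds per day
    return sum(conc_fn(q) * 1e-9 * q * dt for q in daily_discharge_m3s)

# A constant 1 ng/L at 1 m^3/s for a year moves ~0.032 kg of THg.
load = annual_load_kg([1.0] * 365, lambda q: 1.0)
```

The CSC method differs only in where the concentration comes from: `conc_fn` would be driven by the continuous SSC record rather than by discharge.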
Thermally activated switching at long time scales in exchange-coupled magnetic grains
NASA Astrophysics Data System (ADS)
Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.; Fal, T. J.
2015-10-01
Rate coefficients of the Arrhenius-Néel form are calculated for thermally activated magnetic moment reversal for dual-layer exchange-coupled composite (ECC) media based on the Langer formalism and are applied to study the sweep rate dependence of M-H hysteresis loops as a function of the exchange coupling I between the layers. The individual grains are modeled as two exchange-coupled Stoner-Wohlfarth particles, from which the minimum energy paths connecting the minimum energy states are calculated using a variant of the string method, and the energy barriers and attempt frequencies are calculated as a function of the applied field. The resultant rate equations describing the evolution of an ensemble of noninteracting ECC grains are then integrated numerically in an applied field with constant sweep rate R = -dH/dt and the magnetization calculated as a function of the applied field H. M-H hysteresis loops are presented for a range of values of I for sweep rates 10^5 Oe/s ≤ R ≤ 10^10 Oe/s, and a figure of merit that quantifies the advantages of ECC media is proposed. M-H hysteresis loops are also calculated based on the stochastic Landau-Lifshitz-Gilbert equations for 10^8 Oe/s ≤ R ≤ 10^10 Oe/s and are shown to be in good agreement with those obtained from the direct integration of the rate equations. The results are also used to examine the accuracy of certain approximate models that reduce the complexity associated with the Langer-based formalism and provide some useful insight into the reversal process and its dependence on the coupling strength and sweep rate. Of particular interest is the clustering of minimum energy states separated by relatively low-energy barriers into "metastates." It is shown that while approximating the reversal process in terms of metastates results in little loss of accuracy, it can reduce the run time of a kinetic Monte Carlo (KMC) simulation of the magnetic decay of an ensemble of dual-layer ECC media by 2-3 orders of magnitude.
The essentially exact results presented in this work for two coupled grains are analogous to the Stoner-Wohlfarth model of a single grain and serve as an important precursor to KMC-based simulation studies on systems of interacting dual-layer ECC media.
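The sweep-rate dependence of the switching field can be illustrated with a much simpler system than the paper's dual-layer model: a single aligned Stoner-Wohlfarth grain with an Arrhenius-Néel escape rate (a toy sketch; all parameter values below are assumptions, not the paper's):

```python
import math

def dynamic_coercivity(R, f0=1e9, xi=60.0, Hk=1.0, steps=100000):
    # Toy model: one aligned Stoner-Wohlfarth grain, field swept against
    # the moment at constant rate R (in units of Hk per second).
    # Arrhenius-Neel escape rate out of the metastable state:
    #   k(H) = f0 * exp(-xi * (1 - H/Hk)**2),  xi = KV / (kB*T)
    # Master equation dn/dt = -k(H) * n for the surviving fraction n.
    dt = Hk / R / steps
    n = 1.0
    for i in range(steps):
        h = (i + 0.5) * dt * R            # current opposing field
        k = f0 * math.exp(-xi * (1.0 - h / Hk) ** 2)
        n *= math.exp(-k * dt)
        if n <= 0.5:
            return h                      # half the population reversed
    return Hk

hc_slow = dynamic_coercivity(1.0)    # slow sweep: thermal activation helps
hc_fast = dynamic_coercivity(1e6)    # fast sweep: larger switching field
```

Faster sweeps leave less time for thermal activation, so the half-reversal field grows with R; this loop-widening with sweep rate is the qualitative effect the paper quantifies for coupled grains.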
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
NASA Technical Reports Server (NTRS)
Boughner, R. E.
1985-01-01
Within the atmosphere of the earth, absorption and emission of thermal radiation by the 15-micron CO2 bands are the largest contributors to infrared cooling rates in the stratosphere. Various techniques for calculating cooling rates due to these bands have been described. These techniques can be classified into one of two categories, including 'exact' or line-by-line calculations and other methods. The latter methods are based on broad band emissivity and band absorptance formulations. The present paper has the objective to present comparisons of the considered computational approaches. It was found that the best agreement with the exact line-by-line calculations of Fels and Schwarzkopf (1981) could be obtained by making use of a new Doppler band model which is described in the appendix of the paper.
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e. the energy expenditure was 8.3 +/- 1.5%.
Radiative decay rate of excitons in square quantum wells: Microscopic modeling and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khramtsov, E. S.; Grigoryev, P. S.; Ignatiev, I. V.
The binding energy and the corresponding wave function of excitons in GaAs-based finite square quantum wells (QWs) are calculated by direct numerical solution of the three-dimensional Schrödinger equation. Precise results for the lowest exciton state are obtained by discretizing the Hamiltonian with a high-order finite-difference scheme. The microscopic calculations are compared with the results obtained by the standard variational approach. The exciton binding energies found by the two methods coincide within 0.1 meV over a wide range of QW widths. The radiative decay rate is calculated for QWs of various widths using the exciton wave functions obtained by the direct and variational methods. The radiative decay rates are confronted with experimental data measured for high-quality GaAs/AlGaAs and InGaAs/GaAs QW heterostructures grown by molecular beam epitaxy. The calculated and measured values are in good agreement, though slight differences with earlier calculations of the radiative decay rate are observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.
2007-11-01
Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose-rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed-dose-rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with clinically relevant study of new dosimetric rules.
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients, with the associated delays, and chaos synchronization are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
Chlorine hazard evaluation for the zinc-chlorine electric vehicle battery. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zalosh, R.G.; Bajpai, S.N.; Short, T.P.
1980-04-01
An evaluation of the hazards associated with conceivable accidental chlorine releases from zinc-chlorine electric vehicle batteries is presented. Since commercial batteries are not yet available, this hazard assessment is based both on theoretical chlorine dispersion models and on small-scale and large-scale spill tests with chlorine hydrate. Six spill tests involving chlorine hydrate indicate that the danger zone, in which chlorine vapor concentrations intermittently exceed 100 ppM, extends at least 23 m directly downwind of a spill onto a warm road surface. Chlorine concentration data from the hydrate spill tests compare favorably with calculations based on a quasi-steady area source dispersion model and empirical estimates of the hydrate decomposition rate. The theoretical dispersion model has been combined with assumed hydrate spill probabilities and current motor vehicle accident statistics in order to project expected chlorine-induced fatality rates. These calculations indicate that expected chlorine fatality rates are several times higher in a city with a warm and calm climate than in a colder and windier city. Calculated chlorine-induced fatality rate projections for various climates are presented as a function of hydrate spill probability in order to illustrate the degree of vehicle/battery crashworthiness required to maintain chlorine-induced fatality rates below current vehicle fatality rates due to fires and asphyxiations.
Modeling and calculation of turbulent lifted diffusion flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, J.P.H.; Lamers, A.P.G.G.
1994-01-01
Liftoff heights of turbulent diffusion flames have been modeled using the laminar diffusion flamelet concept of Peters and Williams. The strain rate of the smallest eddies is used as the stretch-describing parameter, instead of the more common scalar dissipation rate. The h(U) curve, which is the mean liftoff height as a function of fuel exit velocity, can be accurately predicted, while this was impossible with the scalar dissipation rate. Liftoff calculations performed in the flames as well as in the equivalent isothermal jets, using a standard k-ε turbulence model, yield approximately the same correct slope for the h(U) curve, while the offset has to be reproduced by choosing an appropriate coefficient in the strain rate model. For the flame calculations a model for the pdf of the fluctuating flame base is proposed; the results are insensitive to its width. The temperature field is qualitatively different from the field calculated by Bradley et al., who used a premixed flamelet model for diffusion flames.
[Detection of Heart Rate of Fetal ECG Based on STFT and BSS].
Wang, Xu; Cai, Kun
2016-01-01
Changes in fetal heart rate reflect the regulatory function of the circulatory system and the central nervous system, so it is significant to detect the fetal heart rate in the perinatal period. This paper puts forward a fetal heart rate detection method based on the short-time Fourier transform (STFT) and blind source separation (BSS). First, the mixed ECG signal was preprocessed, and wavelet transform was used to separate the noisy fetal ECG signal from the mixed ECG signal; after that, the short-time Fourier transform and blind separation were carried out and the correlation coefficients calculated. Finally, the independent component with the strongest correlation with the original signal was selected for FECG R-peak detection, and the fetal instantaneous heart rate was calculated. The experimental results show that the method can improve the detection rate of the FECG R peaks and locate them accurately even at low signal-to-noise ratio.
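The last step, converting detected R-peak times into an instantaneous heart rate, is a simple interval calculation (a generic sketch; the peak times below are made up):

```python
def instantaneous_heart_rate(r_peak_times_s):
    # Beats per minute from successive R-R intervals (seconds):
    # HR_i = 60 / (t_i - t_{i-1})
    return [60.0 / (t2 - t1)
            for t1, t2 in zip(r_peak_times_s, r_peak_times_s[1:])]

# Fetal R-R intervals near 0.42 s correspond to ~143 bpm.
hr = instantaneous_heart_rate([0.0, 0.42, 0.84, 1.25])
```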
An Examination of the Relationship between Computation, Problem Solving, and Reading
ERIC Educational Resources Information Center
Cormier, Damien C.; Yeo, Seungsoo; Christ, Theodore J.; Offrey, Laura D.; Pratt, Katherine
2016-01-01
The purpose of this study is to evaluate the relationship of mathematics calculation rate (curriculum-based measurement of mathematics; CBM-M), reading rate (curriculum-based measurement of reading; CBM-R), and mathematics application and problem solving skills (mathematics screener) among students at four levels of proficiency on a statewide…
Harris, M Anne
2010-09-15
Epidemiologic research that uses administrative records (rather than registries or clinical surveys) to identify cases for study has been increasingly restricted because of concerns about privacy, making unbiased population-based research less practicable. In their article, Nattinger et al. (Am J Epidemiol. 2010;172(6):637-644) present a method for using administrative data to contact participants that has been well received. However, the methods employed for calculating and reporting response rates require further consideration, particularly the classification of untraceable cases as ineligible. Depending on whether response rates are used to evaluate the potential for bias to influence study results or to evaluate the acceptability of the method of contact, different fractions may be considered. To improve the future study of epidemiologic research methods, a consensus on the calculation and reporting of study response rates should be sought.
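The commentary's point that classifying untraceable cases as ineligible changes the reported rate is easy to make concrete (illustrative counts, not figures from the studies discussed):

```python
def response_rates(respondents, refusals, untraceable):
    contacted = respondents + refusals
    # Untraceable cases treated as ineligible (dropped from denominator):
    # appropriate when evaluating the acceptability of the contact method.
    rate_if_ineligible = respondents / contacted
    # Untraceable cases treated as eligible nonrespondents (lower bound):
    # appropriate when evaluating the potential for selection bias.
    rate_if_eligible = respondents / (contacted + untraceable)
    return rate_if_ineligible, rate_if_eligible

high, low = response_rates(600, 200, 200)  # 0.75 vs. 0.60
```

The same study can thus honestly report two quite different "response rates" depending on which question the fraction is meant to answer.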
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the current accepted concept and philosophy of safety improvement based on a Reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a Reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as shown in the REBST case study. In the end it is concluded that an approach combining the two theories works best to reduce Safety Risk.
NASA Astrophysics Data System (ADS)
Penjweini, Rozhin; Kim, Michele M.; Ong, Yi Hong; Zhu, Timothy C.
2017-02-01
Although photodynamic therapy (PDT) is an established modality for the treatment of cancer, current dosimetric quantities do not account for the variations in PDT oxygen consumption at different fluence rates (φ). In this study we examine the efficacy of the reacted singlet oxygen concentration ([1O2]rx) in predicting the long-term local control rate (LCR) for Photofrin-mediated PDT. Radiation-induced fibrosarcoma (RIF) tumors in the right shoulders of female C3H mice are treated with different in-air fluences of 225-540 J/cm2 and in-air fluence rates (φair) of 50 and 75 mW/cm2 at 5 mg/kg Photofrin and a drug-light interval of 24 hours, using a 1 cm diameter collimated laser beam at 630 nm wavelength. [1O2]rx is calculated using a macroscopic model based on explicit dosimetry of the Photofrin concentration, tissue optical properties, tissue oxygenation, and blood flow changes during PDT. The tumor volume of each mouse is tracked for 90 days after PDT, and Kaplan-Meier analyses for LCR are performed based on a tumor volume ≤100 mm3 for the four dose metrics: light fluence, photosensitizer photobleaching rate, PDT dose, and [1O2]rx. PDT dose is defined as the temporal integral of the photosensitizer concentration and φ at a 3 mm tumor depth. φ is calculated throughout the treatment volume based on Monte Carlo simulation and measured tissue optical properties. Our preliminary studies show that [1O2]rx is the dosimetric quantity that best predicts tumor response and correlates with LCR. Moreover, [1O2]rx calculated using the blood flow changes was in agreement with [1O2]rx calculated based on the actual tissue oxygenation.
MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a graphical user interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions, and x-ray tube locations. We have implemented sub-GUIs that allow regions and equipment to be assigned occupancy factors, design goals, numbers of patients, primary beam directions, source-to-patient distances, and workload distributions. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location, and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information, and selects the largest value needed to comply with NCRP Report 147 design goals.
Floor plans, parameters, designs, and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopy rooms.
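The core of such a per-point calculation pairs the required transmission with the Archer fit for broad-beam barrier transmission; a minimal sketch, with placeholder fit coefficients rather than the Report's tabulated values:

```python
import math

def barrier_thickness(P_mGy_wk, d_m, K1_mGy_per_patient, N, T,
                      alpha, beta, gamma):
    # Required transmission: B = P * d^2 / (K1 * N * T), where P is the
    # weekly design goal, d the source-to-point distance, K1 the
    # unshielded air kerma per patient at 1 m, N patients/week, T occupancy.
    B = P_mGy_wk * d_m ** 2 / (K1_mGy_per_patient * N * T)
    # Invert the Archer transmission model B(x) for the thickness x
    # (units of x follow the units of alpha, beta, gamma).
    x = (1.0 / (alpha * gamma)) * math.log(
        (B ** -gamma + beta / alpha) / (1.0 + beta / alpha))
    return B, x

# alpha, beta, gamma below are placeholders; real values depend on the
# barrier material and workload spectrum (see the Report's tables).
B, x_mm = barrier_thickness(0.02, 3.0, 5.0, 100, 1.0, 2.5, 15.0, 0.75)
```

When B works out to 1 or more, no added shielding is needed and the inversion returns zero thickness, which is a convenient self-check.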
[Development of DRG-reimbursement in hand surgery].
Lotter, O; Stahl, S; Nyszkiewicz, R; Schaller, H-E
2011-02-01
Since the introduction of Diagnosis Related Groups (DRGs) in Germany in 2004, the variables of remuneration have changed continuously. Subjectively, DRG reimbursement in hand surgery has a negative connotation among colleagues. We analyzed the development of reimbursement as well as the length of stay of inpatients in hand surgery over time, considering various parameters and concentrating on trends and correlation with macroeconomic parameters. The top 10 diagnoses and therapies between 2004 and 2010 in our clinic were grouped, and the resulting DRGs with further linked data could be obtained. In addition to the Base Rate, the Pay Base Rate (the effective Base Rate in a certain hospital considering compensatory payment) and the Z-Bax (the value that was reimbursed by the national health insurance per Base Rate) were used to calculate reimbursement. These were multiplied by the number of cases treated in 2009 to obtain the different total annual remunerations. The lower threshold of length of stay was constant over time, and the middle length of stay became shorter for most of the top 10 diagnoses, whereas the upper threshold of length of stay was reduced to half. The Base Rate and the Pay Base Rate increased by the end of the period but were outmatched by the Z-Bax as an indicator of the general level of reimbursement in Germany. Total remuneration between 2004 and 2009 was compared applying the Z-Bax and the Base Rate as well as the Pay Base Rate, respectively; for the latter, surpluses of 244,000 Euros and 311,000 Euros were calculated, respectively. No correlation with the gross national product or the rate of inflation could be found. The Pay Base Rate, as the rate of effective payment in our clinic, declined by 7%, whereas the consumer price index gained 8.6%, resulting in a loss of purchasing power of almost 16% over the 6-year period. © Georg Thieme Verlag KG Stuttgart · New York.
Methods for Evaluating Flammability Characteristics of Shipboard Materials
1994-02-28
• smoke optical properties; and • (toxic) gas production rates. In general, the prediction of these full-scale burning characteristics requires ... Method. The ASTM Room/Corner Test Method can be used to calculate the heat release rate of a material based upon oxygen depletion calorimetry. As can be ... Clearly, more validation is required for the theoretical calculations. All are consistent in the use of calorimeter and UFT-type property data, and all show
78 FR 7750 - Summer Food Service Program; 2013 Reimbursement Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
.... Reimbursement is based solely on a ``meals times rates'' calculation, without comparison to actual or budgeted... public of the annual adjustments to the reimbursement rates for meals served in the Summer Food Service... adjustments to the reimbursement rates for meals served in SFSP. In accordance with sections 12(f) and 13, 42...
76 FR 5328 - Summer Food Service Program; 2011 Reimbursement Rates
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
.... Since January 1, 2008, reimbursement has been based solely on a ``meals times rates'' calculation... public of the annual adjustments to the reimbursement rates for meals served in the Summer Food Service... reimbursement rates for meals served in the Summer Food Service Program (SFSP). In accordance with sections 12(f...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
On the predictability of outliers in ensemble forecasts
NASA Astrophysics Data System (ADS)
Siegert, S.; Bröcker, J.; Kantz, H.
2012-03-01
In numerical weather prediction, ensembles are used to retrieve probabilistic forecasts of future weather conditions. We consider events where the verification is smaller than the smallest, or larger than the largest ensemble member of a scalar ensemble forecast. These events are called outliers. In a statistically consistent K-member ensemble, outliers should occur with a base rate of 2/(K+1). In operational ensembles this base rate tends to be higher. We study the predictability of outlier events in terms of the Brier Skill Score and find that forecast probabilities can be calculated which are more skillful than the unconditional base rate. This is shown analytically for statistically consistent ensembles. Using logistic regression, forecast probabilities for outlier events in an operational ensemble are calculated. These probabilities exhibit positive skill which is quantitatively similar to the analytical results. Possible causes of these results as well as their consequences for ensemble interpretation are discussed.
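As an illustrative sketch (not the authors' code), the 2/(K+1) base rate for a statistically consistent ensemble can be checked with a toy Monte Carlo experiment in which the verification and the K members are drawn from the same distribution:

```python
import random

def outlier_base_rate(k):
    """Theoretical outlier base rate for a statistically consistent
    K-member ensemble: 2 / (K + 1)."""
    return 2.0 / (k + 1)

def simulated_outlier_rate(k, trials=100_000, seed=0):
    """Draw K ensemble members and one verification from the same
    distribution; count how often the verification falls outside
    the ensemble range (i.e., an outlier event occurs)."""
    rng = random.Random(seed)
    outliers = 0
    for _ in range(trials):
        members = [rng.random() for _ in range(k)]
        verification = rng.random()
        if verification < min(members) or verification > max(members):
            outliers += 1
    return outliers / trials
```

For a 9-member ensemble the theoretical base rate is 0.2, and the simulation converges to the same value; operational ensembles, as the abstract notes, tend to exceed it.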
NASA Astrophysics Data System (ADS)
Karaman, Rafik; Ghareeb, Hiba; Dajani, Khuloud Kamal; Scrano, Laura; Hallak, Hussein; Abu-Lafi, Saleh; Mecca, Gennaro; Bufo, Sabino A.
2013-07-01
Based on density functional theory (DFT) calculations of the acid-catalyzed hydrolysis of several maleamic acid amide derivatives, four tranexamic acid prodrugs were designed. The DFT results on the acid-catalyzed hydrolysis revealed that the rate-limiting step of the reaction is determined by the nature of the amine leaving group. When the amine leaving group was a primary amine or the tranexamic acid moiety, the collapse of the tetrahedral intermediate was the rate-limiting step, whereas in the cases in which the amine leaving group was aciclovir or cefuroxime, the rate-limiting step was the formation of the tetrahedral intermediate. The linear correlation between the calculated DFT and experimental rates for N-methylmaleamic acids 1-7 provided a credible basis for designing tranexamic acid prodrugs that have the potential to release the parent drug in a sustained-release fashion. For example, based on the calculated B3LYP/6-31G(d,p) rates, the predicted t1/2 (the time needed for 50% of the prodrug to be converted into drug) values for tranexamic acid prodrugs ProD 1-ProD 4 at pH 2 were 556 h [50.5 h as calculated by B3LYP/311+G(d,p) and 6.2 h as calculated by GGA:MPW1K], 253 h, 70 s, and 1.7 h, respectively. A kinetic study on the interconversion of the newly synthesized tranexamic acid prodrug ProD 1 revealed that the t1/2 for its conversion to the parent drug was largely affected by the pH of the medium. The experimental t1/2 values in 1 N HCl, buffer pH 2, and buffer pH 5 were 54 min, 23.9 h, and 270 h, respectively.
Beckers, Paul J; Possemiers, Nadine M; Van Craenenbroeck, Emeline M; Van Berendoncks, An M; Wuyts, Kurt; Vrints, Christiaan J; Conraads, Viviane M
2012-02-01
Exercise training efficiently improves peak oxygen uptake (V˙O2peak) in patients with chronic heart failure. To optimize training-derived benefit, higher exercise intensities are being explored. The correct identification of anaerobic threshold is important to allow safe and effective exercise prescription. During 48 cardiopulmonary exercise tests obtained in patients with chronic heart failure (59.6 ± 11 yrs; left ventricular ejection fraction, 27.9% ± 9%), ventilatory gas analysis findings and lactate measurements were collected. Three technicians independently determined the respiratory compensation point (RCP), the heart rate turning point (HRTP) and the second lactate turning point (LTP2). Thereafter, exercise intensity (target heart rate and workload) was calculated and compared between the three methods applied. Patients had significantly reduced maximal exercise capacity (68% ± 21% of predicted V˙O2peak) and chronotropic incompetence (74% ± 7% of predicted peak heart rate). Heart rate, workload, and V˙O2 at HRTP and at RCP were not different, but at LTP2, these parameters were significantly (P < 0.0001) higher. Mean target heart rate and target workload calculated using the LTP2 were 5% and 12% higher compared with those calculated using HRTP and RCP, respectively. The calculation of target heart rate based on LTP2 was 5% and 10% higher in 12 of 48 (25%) and 6 of 48 (12.5%) patients, respectively, compared with the other two methods. In patients with chronic heart failure, RCP and HRTP, determined during cardiopulmonary exercise tests, precede the occurrence of LTP2. Target heart rates and workloads used to prescribe tailored exercise training in patients with chronic heart failure based on LTP2 are significantly higher than those derived from HRTP and RCP.
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also require uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed, and uncertainties in the multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
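A minimal sketch of the random-sampling idea behind XSUSA-style propagation (the toy `response` function stands in for a full HELIOS2 lattice calculation; all names and numbers below are illustrative assumptions, not the codes' APIs):

```python
import random
import statistics

def propagate_uncertainty(nominal_xs, rel_sigma, response,
                          n_samples=500, seed=1):
    """Random-sampling uncertainty propagation: perturb each cross
    section according to its relative standard deviation, evaluate the
    response for every sampled data set, and report the mean and
    standard deviation of the resulting response distribution."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        sample = {name: value * (1.0 + rng.gauss(0.0, rel_sigma[name]))
                  for name, value in nominal_xs.items()}
        results.append(response(sample))
    return statistics.mean(results), statistics.stdev(results)
```

In a real application the sampling would respect the full covariance matrix rather than independent relative sigmas; this sketch only shows the sample-evaluate-summarize loop.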
Gamma-ray spectra and doses from the Little Boy replica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, C.E.; Lucas, M.C.; Tisinger, E.W.
1984-01-01
Most radiation safety guidelines in the nuclear industry are based on the data concerning the survivors of the nuclear explosions at Hiroshima and Nagasaki. Crucial to determining these guidelines is the radiation from the explosions. We have measured gamma-ray pulse-height distributions from an accurate replica of the Little Boy device used at Hiroshima, operated at low power levels near critical. The device was placed outdoors on a stand 4 m from the ground to minimize environmental effects. The power levels were based on a monitor detector calibrated very carefully in independent experiments. High-resolution pulse-height distributions were acquired with a germanium detector to identify the lines and to obtain line intensities. The 7631 to 7645 keV doublet from neutron capture in the heavy steel case was dominant. Low-resolution pulse-height distributions were acquired with bismuth-germanate detectors. We calculated flux spectra from these distributions using accurately measured detector response functions and efficiency curves. We then calculated dose-rate spectra from the flux spectra using a flux-to-dose-rate conversion procedure. The integral of each dose-rate spectrum gave an integral dose rate. The integral doses at 2 m ranged from 0.46 to 1.03 mrem per 10^13 fissions. The output of the Little Boy replica can be calculated with Monte Carlo codes. Comparison of our experimental spectra, line intensities, and integral doses can be used to verify these calculations at low power levels and give increased confidence to the calculated values from the explosion at Hiroshima. These calculations then can be used to establish better radiation safety guidelines. 7 references, 7 figures, 2 tables.
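The flux-to-dose-rate conversion step described above amounts to folding the group-wise flux spectrum with conversion factors and summing over groups; a minimal sketch (the group names and factor values are made-up placeholders, not the paper's data):

```python
def dose_rate_from_flux(flux_spectrum, flux_to_dose):
    """Fold a group-wise photon flux spectrum (photons/cm^2/s per
    energy group) with flux-to-dose-rate conversion factors
    (dose rate per unit flux, same group structure) and sum the
    contributions to get the integral dose rate."""
    return sum(flux * flux_to_dose[group]
               for group, flux in flux_spectrum.items())
```

The integral of the resulting dose-rate spectrum is what the paper reports as the integral dose rate at a given distance.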
NASA Technical Reports Server (NTRS)
Barrett, C. A.; Lowell, C. E.
1974-01-01
The cyclic and isothermal oxidation resistance of 25 high-temperature Ni-, Co-, and Fe-base sheet alloys after 100 hours in air at 1150 C was compared. The alloys were evaluated in terms of their oxidation, scaling, and vaporization rates and their tendency for scale spallation. These values were used to develop an oxidation rating parameter based on effective thickness change, as calculated from a mass balance. The calculated thicknesses generally agreed with the measured values, including grain boundary oxidation, to within a factor of 3. Oxidation behavior was related to composition, particularly Cr and Al content.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-11
... calculation program which affects how we calculate the freight revenue cap. See Decision Memorandum at Comment... (or customer-specific) ad valorem assessment rates based on the ratio of the total amount of the...
NASA Astrophysics Data System (ADS)
Ivanov, V.; Samokhin, A.; Danicheva, I.; Khrennikov, N.; Bouscuet, J.; Velkov, K.; Pasichnyk, I.
2017-01-01
In this paper, the approaches used to develop the BN-800 reactor test model and to validate coupled neutron-physics and thermohydraulic calculations are described. The coupled codes ATHLET 3.0 (a code for thermohydraulic calculations of reactor transients) and DYN3D (a 3-dimensional neutron kinetics code) are used for the calculations. The main calculation results for the reactor steady-state condition are provided. The 3-D model used for the neutron calculations was developed for the initial BN-800 reactor load. A homogeneous approach is used for the description of the reactor assemblies. Along with the main simplifications, the main BN-800 core zones are described (LEZ, MEZ, HEZ, MOX, blankets). The 3-D neutron physics calculations were performed with a 28-group library based on the ENDF/B-7.0 evaluated nuclear data. The SCALE code was used for the preparation of group constants. The nodalized hydraulic model has boundary conditions on coolant mass flow rate at the core inlet and on pressure and enthalpy at the core outlet, which can be chosen depending on the reactor state. Core inlet and outlet temperatures were chosen according to the reactor nominal state. The profiling of the coolant mass flow rate through the core is based on the reactor power distribution. Test thermohydraulic calculations made with the developed model showed acceptable results for the coolant mass flow rate distribution through the reactor core and for the axial temperature and pressure distributions. The developed model will be upgraded in the future for analysis of different transients in metal-cooled fast reactors of the BN type, including reactivity transients (control rod withdrawal, stop of the main circulation pump, etc.).
Improved estimates of environmental copper release rates from antifouling products.
Finnie, Alistair A
2006-01-01
The US Navy Dome method for measuring copper release rates from antifouling paint in-service on ships' hulls can be considered to be the most reliable indicator of environmental release rates. In this paper, the relationship between the apparent copper release rate and the environmental release rate is established for a number of antifouling coating types using data from a variety of available laboratory, field and calculation methods. Apart from a modified Dome method using panels, all laboratory, field and calculation methods significantly overestimate the environmental release rate of copper from antifouling coatings. The difference is greatest for self-polishing copolymer antifoulings (SPCs) and smallest for certain erodible/ablative antifoulings, where the ASTM/ISO standard and the CEPE calculation method are seen to typically overestimate environmental release rates by factors of about 10 and 4, respectively. Where ASTM/ISO or CEPE copper release rate data are used for environmental risk assessment or regulatory purposes, it is proposed that the release rate values should be divided by a correction factor to enable more reliable generic environmental risk assessments to be made. Using a conservative approach based on a realistic worst case and accounting for experimental uncertainty in the data that are currently available, proposed default correction factors for use with all paint types are 5.4 for the ASTM/ISO method and 2.9 for the CEPE calculation method. Further work is required to expand this data-set and refine the correction factors through correlation of laboratory measured and calculated copper release rates with the direct in situ environmental release rate for different antifouling paints under a range of environmental conditions.
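The proposed correction is a simple division by a method-specific factor; a sketch using the paper's proposed default factors of 5.4 (ASTM/ISO) and 2.9 (CEPE):

```python
# Proposed default correction factors from the paper.
CORRECTION = {"ASTM/ISO": 5.4, "CEPE": 2.9}

def environmental_release_rate(measured_rate, method):
    """Convert a laboratory/calculated copper release rate to an
    estimated environmental release rate by dividing by the
    method-specific correction factor."""
    return measured_rate / CORRECTION[method]
```

For example, an ASTM/ISO value of 27 ug/cm^2/day (an illustrative number) corrects to about 5 ug/cm^2/day.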
NASA Astrophysics Data System (ADS)
Naine, Tarun Bharath; Gundawar, Manoj Kumar
2017-09-01
We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. Because no analytical solution exists and the differences in ignition times of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulation. However, such simulations, which are based on Monte-Carlo methods, require several iterations of the calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from a discrete probability between two limits are in excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of the burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations: the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.
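A hedged sketch of the key ingredient, the probability that a gamma-distributed neighbour distance falls between two limits (the incomplete-gamma series used here is standard; the function and parameter names are our own, not the paper's):

```python
import math

def gamma_cdf(x, shape, scale=1.0):
    """Regularized lower incomplete gamma function via its series
    expansion: P(a, x) = x^a e^{-x} / Gamma(a)
                        * sum_{n>=0} x^n / (a (a+1) ... (a+n))."""
    if x <= 0:
        return 0.0
    x = x / scale
    term = 1.0 / shape
    total = term
    n = 0
    while term > 1e-15 * total:
        n += 1
        term *= x / (shape + n)
        total += term
    return total * math.exp(shape * math.log(x) - x - math.lgamma(shape))

def neighbour_distance_probability(lower, upper, shape, scale):
    """Probability that the gamma-distributed neighbour distance
    falls between the two (non-dimensional) limits."""
    return gamma_cdf(upper, shape, scale) - gamma_cdf(lower, shape, scale)
```

With shape 1 the gamma distribution reduces to an exponential, which gives a convenient closed-form check on the series.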
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.; Knudsen, D. J.
2016-12-01
Changes in D-region ionization caused by energetic particle precipitation are monitored by the Array for Broadband Observations of VLF/ELF Emissions (ABOVE), a network of receivers deployed across Western Canada. The observed amplitudes and phases of subionospherically propagating VLF signals from distant artificial transmitters depend sensitively on the free-electron population created by the precipitation of energetic charged particles. Those include both primary (electrons, protons, and heavier ions) and secondary (cascades of ionized particles and electromagnetic radiation) components. We have designed and implemented a full-scale model to predict the received VLF signals based on first-principles charged particle transport calculations coupled to the Long Wavelength Propagation Capability (LWPC) software. Calculations of ionization rates and free-electron densities are based on MCNP6 (a general-purpose Monte Carlo N-Particle code), taking advantage of its capability for coupled neutron/photon/electron transport and its novel library of cross-sections for low-energy electron and photon interactions with matter. Cosmic-ray calculations of background ionization are based on source spectra obtained both from direct PAMELA cosmic-ray spectrum measurements and from the recently implemented MCNP6 galactic cosmic-ray source, scaled using our (Calgary) neutron monitor measurements. Conversion from the calculated fluxes (MCNP F4 tallies) to ionization rates for low-energy electrons is based on the total ionization cross-sections for oxygen and nitrogen molecules from the National Institute of Standards and Technology. We use our model to explore the complexity of the physical processes affecting VLF propagation.
Collisional excitation of HC3N by para- and ortho-H2
NASA Astrophysics Data System (ADS)
Faure, Alexandre; Lique, François; Wiesenfeld, Laurent
2016-08-01
New calculations for rotational excitation of cyanoacetylene by collisions with hydrogen molecules are performed to include the lowest 38 rotational levels of HC3N and kinetic temperatures to 300 K. Calculations are based on the interaction potential of Wernli et al. whose accuracy is checked against spectroscopic measurements of the HC3N-H2 complex. The quantum coupled-channel approach is employed and complemented by quasi-classical trajectory calculations. Rate coefficients for ortho-H2 are provided for the first time. Hyperfine resolved rate coefficients are also deduced. Collisional propensity rules are discussed and comparisons between quantum and classical rate coefficients are presented. This collisional data should prove useful in interpreting HC3N observations in the cold and warm ISM, as well as in protoplanetary discs.
Methods for Determining Spontaneous Mutation Rates
Foster, Patricia L.
2007-01-01
Spontaneous mutations arise as a result of cellular processes that act upon or damage DNA. Accurate determination of spontaneous mutation rates can contribute to our understanding of these processes and the enzymatic pathways that deal with them. The methods that are used to calculate mutation rates are based on the model for the expansion of mutant clones originally described by Luria and Delbrück and extended by Lea and Coulson. The accurate determination of mutation rates depends on understanding the strengths and limitations of these methods and how to optimize a fluctuation assay for a given method. This chapter describes the proper design of a fluctuation assay, several of the methods used to calculate mutation rates, and ways to evaluate the results statistically. PMID:16793403
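One of the simplest estimators in the Luria-Delbrück framework described above is the p0 method; a sketch (the function name and inputs are illustrative, and the chapter covers more robust estimators, such as Lea and Coulson's median method, for when few or no cultures lack mutants):

```python
import math

def mutation_rate_p0(cultures_total, cultures_no_mutants, final_cells):
    """Luria-Delbruck p0 method: the expected number of mutational
    events per culture is m = -ln(P0), where P0 is the fraction of
    parallel cultures containing no mutants; the mutation rate per
    cell per division is then approximately m / N_final."""
    p0 = cultures_no_mutants / cultures_total
    m = -math.log(p0)
    return m / final_cells
```

For example, if 37 of 100 cultures grown to 1e8 cells show no mutants, the estimated rate is about 1e-8 per cell per division.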
Ab initio study of energy transfer rates and impact sensitivities of crystalline explosives.
Bernstein, Jonathan
2018-02-28
Impact sensitivities of various crystalline explosives were predicted by means of plane wave-density functional theory calculations. Crystal structures and complete vibrational spectra of TATB, PETN, FOX7, TEX, 14DNI, and β-HMX molecular crystals were calculated. A correlation between the phonon-vibron coupling (which is proportionally related to the energy transfer rate between the phonon manifold and the intramolecular vibrational modes) and impact sensitivities of secondary explosives was found. We propose a method, based on ab initio calculations, for the evaluation of impact sensitivities, which consequently can assist in screening candidates for chemical synthesis of high energetic materials.
Method and system for measuring multiphase flow using multiple pressure differentials
Fincke, James R.
2001-01-01
An improved method and system for measuring a multiphase flow in a pressure flow meter. An extended throat venturi is used and pressure of the multiphase flow is measured at three or more positions in the venturi, which define two or more pressure differentials in the flow conduit. The differential pressures are then used to calculate the mass flow of the gas phase, the total mass flow, and the liquid phase. The method for determining the mass flow of the high void fraction fluid flow and the gas flow includes certain steps. The first step is calculating a gas density for the gas flow. The next two steps are finding a normalized gas mass flow rate through the venturi and computing a gas mass flow rate. The following step is estimating the gas velocity in the venturi tube throat. The next step is calculating the pressure drop experienced by the gas-phase due to work performed by the gas phase in accelerating the liquid phase between the upstream pressure measuring point and the pressure measuring point in the venturi throat. Another step is estimating the liquid velocity in the venturi throat using the calculated pressure drop experienced by the gas-phase due to work performed by the gas phase. Then the friction is computed between the liquid phase and a wall in the venturi tube. Finally, the total mass flow rate based on measured pressure in the venturi throat is calculated, and the mass flow rate of the liquid phase is calculated from the difference of the total mass flow rate and the gas mass flow rate.
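For orientation only: the classical single-phase venturi relation that underlies such meters, which the patented method extends with multiple pressure differentials and multiphase corrections (all parameter names and numbers below are illustrative assumptions, not the patent's equations):

```python
import math

def gas_mass_flow(dp, gas_density, throat_area, beta,
                  discharge_coeff=0.98):
    """Single-phase venturi approximation for mass flow rate (kg/s)
    from one measured pressure differential dp (Pa):
        m_dot = C_d * A_t * sqrt(2 * rho * dp / (1 - beta^4)),
    where beta is the throat-to-pipe diameter ratio and A_t the
    throat area (m^2)."""
    return (discharge_coeff * throat_area *
            math.sqrt(2.0 * gas_density * dp / (1.0 - beta ** 4)))
```

The patented method measures two or more such differentials along an extended throat and iterates on them to separate the gas and liquid contributions.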
Calculation of equivalent friction coefficient for castor seed by single screw press
NASA Astrophysics Data System (ADS)
Liu, R.; Xiao, Z.; Li, C.; Zhang, L.; Li, P.; Li, H.; Zhang, A.; Tang, S.; Sun, F.
2017-08-01
Based on the traction angle and the transportation rate equation, castor beans were pressed with a single screw press at different cake diameters and screw speeds. The results showed that the greater the cake diameter and the screw rotation speed, the greater the actual transmission rate. The equivalent friction coefficient was defined and calculated as 0.4136; the friction coefficients between the pressed material and the screw and between the material and the bar cage were less than this equivalent value.
Delamination modeling of laminate plate made of sublaminates
NASA Astrophysics Data System (ADS)
Kormaníková, Eva; Kotrasová, Kamila
2017-07-01
The paper presents the mixed-mode delamination of plates made of sublaminates. For this purpose, an opening-load mode of delamination is proposed as the failure model. The failure model is implemented in the ANSYS code to calculate the mixed-mode delamination response as an energy release rate. The analysis is based on interface techniques. Within the interface finite element model, the individual damage parameters (spring reaction forces, relative displacements, and energy release rates) are calculated along the delamination front.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, B; Vancouver Cancer Centre, Vancouver, BC; Gete, E
2016-06-15
Purpose: This work investigates the dosimetric accuracy of a trajectory based delivery technique in which an optimized radiation beam is delivered along a Couch-Gantry trajectory that is formed by simultaneous rotation of the linac gantry and the treatment couch. Methods: Nine trajectory based cranial SRS treatment plans were created using in-house optimization software. The plans were calculated for delivery on the TrueBeam STx linac with 6MV photon beam. Dose optimization was performed along a user-defined trajectory using MLC modulation, dose rate modulation and jaw tracking. The pre-defined trajectory chosen for this study is formed by a couch rotation through its full range of 180 degrees while the gantry makes four partial arc sweeps which are 170 degrees each. For final dose calculation, the trajectory based plans were exported to the Varian Eclipse Treatment Planning System. The plans were calculated on a homogeneous cube phantom measuring 18.2×18.2×18.2 cm3 with the analytical anisotropic algorithm (AAA) using a 1mm3 calculation voxel. The plans were delivered on the TrueBeam linac via the developer’s mode. Point dose measurements were performed on 9 patients with the IBA CC01 mini-chamber with a sensitive volume of 0.01 cc. Gafchromic film measurements along the sagittal and coronal planes were performed on three of the 9 treatment plans. Point dose values were compared with ion chamber measurements. Gamma analysis comparing film measurement and AAA calculations was performed using FilmQA Pro. Results: The AAA calculations and measurements were in good agreement. The point dose difference between AAA and ion chamber measurements were within 2.2%. Gamma analysis test pass rates (2%, 2mm passing criteria) for the Gafchromic film measurements were >95%.
Conclusion: We have successfully tested TrueBeam’s ability to deliver accurate trajectory based treatments involving simultaneous gantry and couch rotation with MLC and dose rate modulation along the trajectory.
Microcomputer Calculation of Theoretical Pre-Exponential Factors for Bimolecular Reactions.
ERIC Educational Resources Information Center
Venugopalan, Mundiyath
1991-01-01
Described is the application of microcomputers to predict reaction rates based on theoretical atomic and molecular properties taught in undergraduate physical chemistry. Listed is the BASIC program which computes the partition functions for any specific bimolecular reactants. These functions are then used to calculate the pre-exponential factor of…
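In the same spirit as the BASIC program described (sketched here in Python, as an illustration rather than a reproduction), the translational partition function per molecule is one of the building blocks of the theoretical pre-exponential factor:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J s
AMU = 1.66053907e-27  # atomic mass unit, kg

def q_translational(mass_amu, temperature, volume=1.0):
    """Translational partition function of an ideal-gas molecule:
    q_trans = V * (2 * pi * m * k_B * T / h^2)^(3/2)."""
    m = mass_amu * AMU
    return volume * (2.0 * math.pi * m * K_B * temperature
                     / H ** 2) ** 1.5
```

For H2 (about 2.016 amu) at 300 K in 1 m^3 this gives roughly 2.8e30, the kind of intermediate quantity the described program combines with rotational and vibrational partition functions to form the pre-exponential factor.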
Computer supplies insulation recipe for Cookie Company Roof
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Roofing contractors no longer have to rely on complicated calculations and educated guesses to determine cost-efficient levels of roof insulation. A simple hand-held calculator and printer offer seven different programs for quickly figuring insulation thickness based on job type, roof size, tax rates, and heating and cooling cost factors.
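A hedged sketch of the kind of arithmetic such a program performs (US customary units; the function, parameter names, and default furnace efficiency are our own assumptions, not the product's actual programs):

```python
def annual_heating_savings(area_ft2, hdd, r_old, r_new,
                           cost_per_therm, efficiency=0.8):
    """Rough annual heating-cost savings from raising roof insulation
    from R-value r_old to r_new: conductive loss is
    area * HDD * 24 / R (Btu/yr), converted to purchased therms via
    the furnace efficiency."""
    btu_saved = area_ft2 * hdd * 24.0 * (1.0 / r_old - 1.0 / r_new)
    therms_saved = btu_saved / 100_000.0 / efficiency
    return therms_saved * cost_per_therm
```

Dividing the added insulation cost by this figure gives a simple payback period, before the tax-rate adjustments the calculator also factors in.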
Projects in a Solar Energy Course.
ERIC Educational Resources Information Center
Lindsay, Richard H.
1983-01-01
Describes student projects on applications of solar energy optics to home design. Project criterion (requiring sketches and detailed calculations of time rate of energy flow/production) is that half the heat for the heating season be taken from the solar resource; calculations must be based on meteorological data for a specific location. (JM)
76 FR 56430 - Boulder Canyon Project
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-13
... Secretary of Energy approves the Fiscal Year (FY) 2012 Base Charge and Rates (Rates) for Boulder Canyon... calculate the Rates and held a question and answer session. 3. At the public information forum held on April... for FY 2012 in greater detail and held a question and answer session. 4. A public comment forum held...
29 CFR 4022.81 - General rules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... thereon for that month using— (i) For months after May 1998, the applicable federal mid-term rate (as... (or, where the rate for a month is not available at the time the PBGC calculates the amount to be recouped or reimbursed, the most recent month for which the rate is available) based on monthly compounding...
NASA Astrophysics Data System (ADS)
Houpert, Loïc; Testor, Pierre; Durrieu de Madron, Xavier; Somot, Samuel; D'Ortenzio, Fabrizio; Estournel, Claude; Lavigne, Héloïse
2014-05-01
We present a relatively high-resolution Mediterranean climatology (0.5°x0.5°x12 months) of the seasonal thermocline based on a comprehensive collection of temperature profiles from the last 44 years (1969-2012). The database includes more than 190,000 profiles, merging CTD, XBT, profiling float, and glider observations. This data set is first used to describe the seasonal cycle of the mixed layer depth and of the seasonal thermocline over the whole Mediterranean on a monthly climatological basis. Our analysis discriminates several regions with coherent behaviors, in particular the deep water formation sites, characterized by significant differences in winter mixing intensity. The Heat Storage Rate (HSR) is calculated as the time rate of change of the heat content due to variations in the temperature integrated from the surface down to the base of the seasonal thermocline. The Heat Entrainment Rate (HER) is calculated as the time rate of change of the heat content due to the deepening of the thermocline base. We propose a new independent estimate of the seasonal cycle of the Net surface Heat Flux (NHF), calculated on average over the Mediterranean Sea for the 1979-2011 period, based only on in-situ observations. We used our new climatologies of HSR and HER, combined with an existing climatology of the horizontal heat flux at the Strait of Gibraltar. Although there is good agreement between our observation-based NHF estimate and modeled NHF, some differences may be noticed during specific periods. Part of these differences may be explained by the high temporal and spatial variability of the mixed layer depth and of the seasonal thermocline, responsible for very localized heat transfer in the ocean.
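The HSR definition above reduces, in discrete form, to a finite difference of the vertically integrated temperature between successive profiles; a sketch with assumed seawater constants (the uniform-grid profiles and the numbers are illustrative):

```python
RHO = 1025.0  # assumed seawater density, kg/m^3
CP = 3985.0   # assumed specific heat of seawater, J/(kg K)

def heat_storage_rate(temp_t1, temp_t2, dz, dt_seconds):
    """Heat storage rate (W/m^2): rho * cp times the time rate of
    change of the vertically integrated temperature between two
    profiles sampled on a uniform dz grid (surface down to the base
    of the seasonal thermocline)."""
    integral_t1 = sum(temp_t1) * dz
    integral_t2 = sum(temp_t2) * dz
    return RHO * CP * (integral_t2 - integral_t1) / dt_seconds
```

For example, a uniform 1 K warming of a 20 m layer over one month corresponds to roughly 30 W/m^2 of heat storage.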
NASA Astrophysics Data System (ADS)
Sharma, Manju; Sharma, Veena; Kumar, Sanjeev; Puri, S.; Singh, Nirmal
2006-11-01
The Mξ, Mαβ, Mγ, and Mm X-ray production (XRP) cross-sections have been measured for the elements with 71 ⩽ Z ⩽ 92 at 5.96 keV incident photon energy, satisfying EM1 < Einc < EL3, where EM1 (EL3) is the M1 (L3) subshell binding energy. These XRP cross-sections have been calculated using photoionization cross-sections based on the relativistic Dirac-Hartree-Slater (RDHS) model with three sets of X-ray emission rates, fluorescence, Coster-Kronig, and super Coster-Kronig yields based on (i) the non-relativistic Hartree-Slater (NRHS) potential model, (ii) the RDHS model, and (iii) the relativistic Dirac-Fock (RDF) model. For the third set, the Mi (i = 1-5) subshell fluorescence yields have been calculated using the RDF model-based X-ray emission rates and total widths reevaluated to incorporate the RDF model-based radiative widths. The measured cross-sections have been compared with the calculated values to check the applicability of the physical parameters based on the different models.
NASA Astrophysics Data System (ADS)
Zhang, L.; Li, Y. R.; Zhou, L. Q.; Wu, C. M.
2017-11-01
In order to understand the influence of various factors on the evaporation rate at the vapor-liquid interface, the evaporation process of water in a pure steam environment was calculated based on statistical rate theory (SRT), and the results were compared with those from the traditional Hertz-Knudsen equation. It is found that the evaporation rate at the vapor-liquid interface increases with increasing evaporation temperature and evaporation temperature difference and with decreasing vapor pressure. When the steam is in a superheated state, evaporation may occur at the vapor-liquid interface even if the temperature of the liquid phase is lower than that of the vapor phase; in this case, the absolute value of the critical temperature difference for the onset of evaporation decreases with increasing vapor pressure. When the evaporation temperature difference is small, the theoretical results based on the SRT are basically the same as the predictions of the Hertz-Knudsen equation, but the deviation between them increases with increasing temperature difference.
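For reference, the traditional Hertz-Knudsen relation used as the comparison baseline can be written as j = alpha * (p_sat - p_v) * sqrt(M / (2 pi R T_l)); a sketch (the accommodation coefficient alpha = 1 and the sample numbers are illustrative assumptions):

```python
import math

R_GAS = 8.314462  # universal gas constant, J/(mol K)

def hertz_knudsen_flux(p_sat, p_vap, t_liquid, molar_mass, alpha=1.0):
    """Hertz-Knudsen evaporation mass flux (kg m^-2 s^-1):
    j = alpha * (p_sat(T_l) - p_vap) * sqrt(M / (2 * pi * R * T_l)).
    Positive j means net evaporation; negative means net
    condensation."""
    return alpha * (p_sat - p_vap) * math.sqrt(
        molar_mass / (2.0 * math.pi * R_GAS * t_liquid))
```

The SRT calculation the paper compares against replaces this kinetic-theory expression with one derived from transition probabilities and entropy production, which is where the deviation at large temperature differences arises.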
Ivins, Brian J; Lange, Rael T; Cole, Wesley R; Kane, Robert; Schwab, Karen A; Iverson, Grant L
2015-02-01
Base rates of low ANAM4 TBI-MIL scores were calculated in a convenience sample of 733 healthy male active duty soldiers using available military reference values for the following cutoffs: ≤2nd percentile (2 SDs), ≤5th percentile, <10th percentile, and <16th percentile (1 SD). Rates of low scores were also calculated in 56 active duty male soldiers who had sustained an mTBI an average of 23 days (SD = 36.1) prior. 22.0% of the healthy sample and 51.8% of the mTBI sample had two or more scores below 1 SD (i.e., the 16th percentile). 18.8% of the healthy sample and 44.6% of the mTBI sample had one or more scores ≤5th percentile. Rates of low scores in the healthy sample were influenced by the cutoffs and by race/ethnicity. Importantly, some healthy soldiers obtain at least one low score on the ANAM4. These base rate analyses can improve the methodology for interpreting ANAM4 performance in clinical practice and research.
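As a point of comparison (not from the paper), the rate of multiple low scores expected if subtest scores were independent follows a binomial tail; assuming, for illustration, seven scored subtests and a 16% per-test low-score probability:

```python
from math import comb

def prob_at_least(n_tests, k, p_low):
    """Probability that at least k of n independent test scores fall
    below a cutoff that a fraction p_low of the healthy population
    falls below (binomial upper tail)."""
    return sum(comb(n_tests, j) * p_low ** j
               * (1 - p_low) ** (n_tests - j)
               for j in range(k, n_tests + 1))
```

Under these assumptions about 31% of healthy examinees would show two or more scores below the 16th percentile; the observed 22.0% differs because real subtest scores are correlated, which is exactly why empirical base rates like those in the paper are needed.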
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
NASA Astrophysics Data System (ADS)
Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming
2016-12-01
The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the CEUS imaging process, which reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS video was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between frames using a warp technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined and taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results with the proposed method were larger than those obtained with the original method. PDOVP extraction results improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and that calculation accuracy improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
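The final averaging step, taking the mean gray level over the detected perfusion region in each frame, can be sketched as below. This is a simplification that omits the optical-flow motion correction and subtraction imaging described above; the frame data and mask are invented.

```python
def time_intensity_curve(frames, mask):
    """Mean gray level over the perfusion region for each frame.
    `frames`: list of 2-D gray images (lists of lists); `mask`: same
    shape, True where vascular perfusion (the PDOVP) was detected."""
    tic = []
    for img in frames:
        vals = [img[r][c]
                for r in range(len(img))
                for c in range(len(img[0]))
                if mask[r][c]]
        tic.append(sum(vals) / len(vals))
    return tic

# Two 2x2 frames; the mask marks the perfusion region (values invented).
frames = [[[10, 20], [30, 40]], [[20, 30], [40, 50]]]
mask = [[True, False], [True, True]]
print(time_intensity_curve(frames, mask))  # mean over the 3 masked pixels per frame
```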
Medication calculation: the potential role of digital game-based learning in nurse education.
Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle
2013-12-01
Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Atsushi; Kojima, Hidekazu; Okazaki, Susumu, E-mail: okazaki@apchem.nagoya-u.ac.jp
2014-08-28
In order to investigate proton transfer reactions in solution, mixed quantum-classical molecular dynamics calculations have been carried out based on our previously proposed quantum equation of motion for the reacting system [A. Yamada and S. Okazaki, J. Chem. Phys. 128, 044507 (2008)]. The surface hopping method was applied to describe forces acting on the solvent classical degrees of freedom. In a series of our studies, quantum and solvent effects on reaction dynamics in solution have been analysed in detail. Here, we report our mixed quantum-classical molecular dynamics calculations for intramolecular proton transfer of malonaldehyde in water. The thermally activated proton transfer process, i.e., vibrational excitation in the reactant state followed by transition to the product state and vibrational relaxation in the product state, as well as the tunneling reaction, can be described by solving the equation of motion. Zero point energy is, of course, included, too. The quantum simulation in water has been compared with the fully classical one and with the wave packet calculation in vacuum. The calculated quantum reaction rate in water was 0.70 ps⁻¹, which is about 2.5 times faster than that in vacuum, 0.27 ps⁻¹. This indicates that the solvent water accelerates the reaction. Further, the quantum calculation resulted in a reaction rate about 2 times faster than the fully classical calculation, which indicates that the quantum effect enhances the reaction rate, too. The contribution from the three reaction mechanisms, i.e., tunneling, thermal activation, and barrier vanishing reactions, is 33:46:21 in the mixed quantum-classical calculations. This clearly shows that the tunneling effect is important in the reaction.
Weimar, C; Stausberg, J; Kraywinkel, K; Wagner, M; Busse, O; Haberl, R L; Diener, H-C
2002-08-02
The upcoming introduction of diagnosis related groups (DRGs) as the exclusive basis for future calculation of hospital proceeds in Germany requires a thorough analysis of cost data for various diseases. Our aim was to compare the resulting combined cost weights of the Australian Refined DRG system (AR-DRG) with the proceeds based on actual per-day rates in stroke treatment. Between 1998 and 1999, data from 6520 patients (median age 68 years, 43% women) with acute stroke or transient ischemic attack (TIA) were prospectively documented in 15 departments of Neurology with an acute stroke unit, 9 departments of general Neurology and 6 departments of Internal Medicine. Prior to grouping cases into DRGs, all available data were transferred into ICD-10-SGB-V 2.0 or the Australian procedure system (MBS-Extended). Hospital proceeds for the respective cases were calculated based on the per-day rates of the documenting hospitals. The resulting cost weights demonstrate good homogeneity compared with length of stay. When introducing the AR-DRGs with a uniform base rate in Germany, a relative decrease of hospital proceeds can be expected in Neurology departments and for the treatment of TIAs. Preserving the existing structure of acute stroke care in Germany requires a supplement to a uniform base rate in Neurology departments.
Hyperfine excitation of CH in collisions with atomic and molecular hydrogen
NASA Astrophysics Data System (ADS)
Dagdigian, Paul J.
2018-04-01
We investigate here the excitation of methylidene (CH) induced by collisions with atomic and molecular hydrogen (H and H2). The hyperfine-resolved rate coefficients were obtained from close coupling nuclear-spin-free scattering calculations. The calculations are based upon recent, high-accuracy calculations of the CH(X2Π)-H(2S) and CH(X2Π)-H2 potential energy surfaces. Cross-sections and rate coefficients for collisions with atomic H, para-H2, and ortho-H2 were computed for all transitions between the 32 hyperfine levels for CH(X2Π) involving the n ≤ 4 rotational levels for temperatures between 10 and 300 K. These rate coefficients should significantly aid in the interpretation of astronomical observations of CH spectra. As a first application, the excitation of CH is simulated for conditions in typical molecular clouds.
Variations of comoving volume and their effects on the star formation rate density
NASA Astrophysics Data System (ADS)
Kim, Sungeun (Physics and Astronomy, Sejong University, Seoul, Republic of Korea)
2018-01-01
To build a comprehensive picture of star formation in the universe, we have developed an application to calculate the comoving volume at a specific redshift and visualize the changes of space and time. The application is based on the star formation rates of a few thousand galaxies and their redshift values. Three-dimensional modeling of these galaxies using the redshift, comoving volume, and star formation rates as input data allows calculation of the star formation rate density corresponding to the redshift. This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (no. 2017037333).
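A minimal version of such a comoving-volume calculation can be written down directly, assuming a flat Lambda-CDM cosmology with illustrative parameters (the application's actual cosmology is not stated in the abstract).

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def comoving_volume(z, H0=70.0, Om=0.3, OL=0.7, steps=10000):
    """All-sky comoving volume out to redshift z (Mpc^3) for a flat
    Lambda-CDM cosmology, via trapezoidal integration of 1/E(z)."""
    def inv_E(zp):
        return 1.0 / math.sqrt(Om * (1.0 + zp) ** 3 + OL)
    h = z / steps
    integral = 0.5 * (inv_E(0.0) + inv_E(z))
    for i in range(1, steps):
        integral += inv_E(i * h)
    d_c = (C_KM_S / H0) * integral * h   # line-of-sight comoving distance, Mpc
    return 4.0 / 3.0 * math.pi * d_c ** 3

# The SFR density for a redshift shell is then the summed SFR of the
# galaxies in the shell divided by the shell's comoving volume:
#   rho_SFR = sum(SFR_i) / (comoving_volume(z2) - comoving_volume(z1))
```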
Chlorite Dissolution Rates From 25 to 275 degrees and pH 3 to 10
Carroll, Susan
2013-09-27
We have calculated a chlorite dissolution rate equation at far-from-equilibrium conditions by combining new data (20 experiments at high temperature) with previously published data from Smith et al. (2013) and Lowson et al. (2007). All rate data (from the 127 experiments) are tabulated in this data submission. More information on the calculation of the rate data can be found in our FY13 annual report (Carroll, LLNL, 2013), which has been submitted to the GDR. The rate equation fills a gap in the geothermal kinetic database and can be used directly to estimate the impact of chemical alteration on all geothermal processes. It is especially important for understanding the role of chemical alteration in the weakening of shear zones in EGS systems.
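The abstract does not reproduce the fitted equation itself; a generic far-from-equilibrium rate law of the Arrhenius-plus-pH form often used for such fits can be sketched as below. All parameter values are placeholders, not the fitted chlorite values from the report.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def dissolution_rate(T, pH, A=1.0e3, Ea=50.0e3, n=0.5):
    """Far-from-equilibrium mineral dissolution rate (mol m^-2 s^-1) of
    the generic form r = A * exp(-Ea / (R*T)) * a_H+^n, where a_H+ is the
    hydrogen ion activity. A, Ea, and n are illustrative placeholders."""
    a_H = 10.0 ** (-pH)
    return A * math.exp(-Ea / (R * T)) * a_H ** n

# Rates rise with temperature and, for acid-promoted dissolution,
# fall as pH rises -- the trends spanned by the 25-275 degree, pH 3-10 data.
print(dissolution_rate(548.15, 3) > dissolution_rate(298.15, 3))   # True
print(dissolution_rate(298.15, 3) > dissolution_rate(298.15, 10))  # True
```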
Modelling of Heat and Moisture Loss Through NBC Ensembles
1991-11-01
the heat and moisture transport through various NBC clothing ensembles. The analysis involves simplifying the three dimensional physical problem of... clothing on a person to that of a one dimensional problem of flow through parallel layers of clothing and air. Body temperatures are calculated based on...prescribed work rates, ambient conditions and clothing properties. Sweat response and respiration rates are estimated based on empirical data to
Calculating the Candy Price Index: A Classroom Inflation Experiment.
ERIC Educational Resources Information Center
Hazlett, Denise; Hill, Cynthia D.
2003-01-01
Outlines how students develop a price index based on candy-purchasing decisions made by class members. Explains that students used the index to practice calculating inflation rates and to consider the strengths and weaknesses of the consumer price index (CPI). States that the exercise has been used in introductory and intermediate macroeconomics…
42 CFR 419.32 - Calculation of prospective payment rates for hospital outpatient services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... outpatient services furnished in 1999 would have equaled the base expenditure target calculated in § 419.30... inpatient market basket percentage increase applicable under section 1886(b)(3)(B)(iii) of the Act reduced... 1, 2001 and before April 1, 2001, by the hospital inpatient market basket percentage increase...
ERIC Educational Resources Information Center
Dinsmore, Daniel L.; Parkinson, Meghan M.
2013-01-01
Although calibration has been widely studied, questions remain about how best to capture confidence ratings, how to calculate continuous variable calibration indices, and on what exactly students base their reported confidence ratings. Undergraduates in a research methods class completed a prior knowledge assessment, two sets of readings and…
Montes Ruiz-Cabello, F Javier; Trefalt, Gregor; Oncsik, Tamas; Szilagyi, Istvan; Maroni, Plinio; Borkovec, Michal
2015-06-25
Force profiles and aggregation rates involving positively and negatively charged polystyrene latex particles are investigated in monovalent electrolyte solutions, whereby the counterions are varied within the Hofmeister series. The force measurements are carried out with the colloidal probe technique, which is based on the atomic force microscope (AFM), while the aggregation rates are measured with time-resolved multiangle light scattering. The interaction force profiles cannot be described by classical DLVO theory, but an additional attractive short-ranged force must be included. An exponential force profile with a decay length of about 0.5 nm is consistent with the measured forces. Furthermore, the Hamaker constants extracted from the measured force profiles are substantially smaller than the theoretical values calculated from dielectric spectra. The small surface roughness of the latex particles (below 1 nm) is probably responsible for this deviation. Based on the measured force profiles, the aggregation rates can be predicted without adjustable parameters. The measured absolute aggregation rates in the fast regime are somewhat lower than the calculated ones. The critical coagulation concentration (CCC) agrees well with the experiment, including the respective shifts of the CCC within the Hofmeister series. These shifts are particularly pronounced for the positively charged particles. However, the consideration of the additional attractive short-ranged force is essential to quantify these shifts correctly. In the slow regime, the calculated rates are substantially smaller than the experimental ones. This disagreement is probably related to surface charge heterogeneities.
Fee, David; Izbekov, Pavel; Kim, Keehoon; ...
2017-10-09
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.
NASA Astrophysics Data System (ADS)
Yu, Q. Z.; Liang, T. J.
2018-06-01
The China Spallation Neutron Source (CSNS) is intended to begin operation in 2018. CSNS is an accelerator-based multidisciplinary user facility. The pulsed neutrons are produced by a 1.6 GeV short-pulsed proton beam impinging on a W-Ta spallation target, at a beam power of 100 kW and a repetition rate of 25 Hz. Twenty neutron beam lines are extracted for neutron scattering and neutron irradiation research. During commissioning and maintenance scenarios, the gamma rays induced in the W-Ta target can pose a dose threat to personnel and the environment. In this paper, the gamma dose rate distributions for the W-Ta spallation target are calculated based on the engineering model of the target-moderator-reflector system. The shipping cask is analyzed to satisfy the dose rate limit of less than 2 mSv/h at the surface of the cask. All calculations are performed with the Monte Carlo code MCNPX2.5 and the activation code CINDER'90.
Neutron Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code
NASA Astrophysics Data System (ADS)
Faghihi, F.; Mehdizadeh, S.; Hadad, K.
The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. This theoretical simulation, together with our experimental work, is a new experience for Iranian researchers and builds confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, as measured by the NAA method and computed with the MCNP code, are compared.
NASA Astrophysics Data System (ADS)
Nabi, Jameel-Un; Ishfaq, Mavra; Böyükata, Mahmut; Riaz, Muhammad
2017-10-01
At finite temperatures (≥ 10⁷ K), 76Se is abundant in the cores of massive stars, and electron capture on 76Se plays a consequential role in the dynamics of core collapse. The present work may be classified into two main categories. In the first phase we study the nuclear structure properties of 76Se using the interacting boson model-1 (IBM-1). The IBM-1 investigations include the energy levels, B(E2) values, and the prediction of the geometry. We performed the extended consistent-Q formalism (ECQF) calculation and later the triaxial formalism calculation (constructed by adding the cubic term to the ECQF). The geometry of 76Se can be envisioned within the formalism of the potential energy surface based on the classical limit of the IBM-1 model. In the second phase, we reconfirm the unblocking of the Gamow-Teller (GT) strength in 76Se (a test case for nuclei having N > 40 and Z < 40). Using the deformed pn-QRPA model we calculate GT transitions, the stellar electron capture cross section (within the limit of low momentum transfer), and stellar weak rates for 76Se. The distinguishing feature of our calculation is a state-by-state evaluation of stellar weak rates in a fully microscopic fashion. Results are compared with experimental data and previous calculations. The calculated GT distribution fulfills the Ikeda sum rule. Rates for β-delayed neutrons and emission probabilities are also calculated. Our study suggests that at high stellar temperatures and low densities, β+ decay of 76Se should not be neglected and needs to be taken into consideration along with electron capture rates in simulations of the presupernova evolution of massive stars.
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2015-01-01
Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety of methods exists to calculate the TTS. These methods can be described based on four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force); (2) signal processing (smoothing by sequential averaging, a moving root-mean-square window, or fitting an unbounded third-order polynomial); (3) the stable state (threshold); and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to sample rate, filter settings, and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Different TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values.
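One common TTS variant, a moving root-mean-square window compared against a threshold, can be sketched as follows. The window length, threshold, and end-of-trial baseline are illustrative choices, not any one of the paper's 18 methods.

```python
def time_to_stabilization(force, fs, window_s=0.25, threshold=0.05):
    """TTS (s) via a moving RMS window: the force signal is demeaned
    against an end-of-trial baseline, and TTS is the first time after
    which the windowed RMS never exceeds `threshold` again."""
    w = max(1, int(window_s * fs))
    baseline = sum(force[-w:]) / w                 # stable state: trial end
    demeaned = [f - baseline for f in force]
    rms = []
    for i in range(len(force) - w + 1):
        seg = demeaned[i:i + w]
        rms.append((sum(x * x for x in seg) / w) ** 0.5)
    last_bad = -1                                  # last window over threshold
    for i, r in enumerate(rms):
        if r > threshold:
            last_bad = i
    if last_bad + 1 >= len(rms):
        return None                                # never stabilized
    return (last_bad + 1) / fs

# Half a second of landing transient, then quiet standing (toy signal):
signal = [1.0] * 50 + [0.0] * 150
print(time_to_stabilization(signal, fs=100))  # 0.5
```

Because TTS is defined off the tail of the trial here, shortening the trial changes both the baseline and the last-exceedance search, which is consistent with the paper's finding that trial length dominates the variation.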
Energy balance in solar and stellar chromospheres
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1981-01-01
Net radiative cooling rates for quiet and active regions of the solar chromosphere and for two stellar chromospheres are calculated from corresponding atmospheric models. Models of chromospheric temperature and microvelocity distributions are derived from observed spectra of a dark point within a cell, the average sun and a very bright network element on the quiet sun, a solar plage and flare, and the stars Alpha Boo and Lambda And. Net radiative cooling rates due to the transitions of various atoms and ions are then calculated from the models as a function of depth. Large values of the net radiative cooling rate are found at the base of the chromosphere-corona transition region which are due primarily to Lyman alpha emission, and a temperature plateau is obtained in the transition region itself. In the chromospheric regions, the calculated cooling rate is equal to the mechanical energy input as a function of height and thus provides a direct constraint on theories of chromospheric heating.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Böcklin, Christoph, E-mail: boecklic@ethz.ch; Baumann, Dirk; Fröhlich, Jürg
A novel way to attain three-dimensional fluence rate maps from Monte Carlo simulations of photon propagation is presented in this work. The propagation of light in a turbid medium is described by the radiative transfer equation and formulated in terms of radiance. For many applications, particularly in biomedical optics, the fluence rate is a more useful quantity and is directly derived from the radiance by integrating over all directions. Contrary to the usual approach, which calculates the fluence rate from absorbed photon power, the fluence rate in this work is calculated directly from the photon packet trajectory. The voxel-based algorithm works in arbitrary geometries and material distributions. It is shown that the new algorithm is more efficient and also works in materials with a low or even zero absorption coefficient. The capabilities of the new algorithm are demonstrated on a curved layered structure, where a non-scattering, non-absorbing layer is sandwiched between two highly scattering layers.
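Scoring fluence from packet trajectories rather than absorbed power is, in essence, a track-length estimator. A one-dimensional sketch of that idea follows; it is a simplification of the voxel-based 3-D algorithm described above.

```python
def fluence_from_tracks(tracks, z_max, n_vox):
    """Track-length fluence estimator on a 1-D voxel grid [0, z_max):
    each straight segment of a photon track deposits its path length in
    the voxels it crosses, and fluence = total track length / voxel
    volume (unit cross-section assumed, so volume = thickness). Unlike
    an absorption-based estimator, this works with zero absorption."""
    dz = z_max / n_vox
    tally = [0.0] * n_vox
    for track in tracks:
        for z0, z1 in zip(track, track[1:]):       # consecutive positions
            a, b = sorted((z0, z1))
            for i in range(n_vox):
                overlap = min(b, (i + 1) * dz) - max(a, i * dz)
                if overlap > 0.0:
                    tally[i] += overlap
    return [t / dz for t in tally]

# One packet crossing the whole slab deposits equal fluence everywhere:
print(fluence_from_tracks([[0.0, 1.0]], z_max=1.0, n_vox=4))  # [1.0, 1.0, 1.0, 1.0]
```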
NASA Astrophysics Data System (ADS)
Longfellow, B.; Gade, A.; Brown, B. A.; Richter, W. A.; Bazin, D.; Bender, P. C.; Bowry, M.; Elman, B.; Lunderberg, E.; Weisshaar, D.; Williams, S. J.
2018-05-01
Energy levels and branching ratios for the rp-process nucleus 25Si were determined from the reactions 9Be(26Si,25Si)X and 9Be(25Al,25Si)X using in-beam γ-ray spectroscopy with both high-efficiency and high-resolution detector arrays. Proton-unbound states at 3695(14) and 3802(11) keV were identified and assigned tentative spins and parities based on comparison to theory and the mirror nucleus. The 24Al(p,γ)25Si reaction rate was calculated using the experimental states and states from charge-dependent USDA and USDB shell-model calculations with downward shifts of the 1s1/2 proton orbital to account for the observed Thomas-Ehrman shift, leading to a factor of 10-100 increase in the rate in the temperature region of 0.22 GK compared to a previous calculation. These shifts may be applicable to neighboring nuclei, impacting the proton capture rates in this region of the chart.
WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides
NASA Astrophysics Data System (ADS)
Ma, Xiuzeng; Li, Yingkui; Bourgeois, Mike; Caffee, Marc; Elmore, David; Granger, Darryl; Muzikar, Paul; Smith, Preston
2007-06-01
Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on the nuclide concentrations measured by the accelerator mass spectrometry. WebCN for 10Be and 26Al has been finished and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for the database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
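The simplest form of such an exposure-age calculation, neglecting erosion, inheritance, and the spatial and temporal flux corrections WebCN applies to the production rate, inverts the nuclide build-up equation.

```python
import math

def exposure_age(N, P, half_life):
    """Surface exposure age (yr) from nuclide concentration N (atoms/g),
    local production rate P (atoms/g/yr) and half-life (yr), assuming no
    erosion and no inheritance:
        N = (P / lam) * (1 - exp(-lam * t))  =>  t = -ln(1 - N*lam/P) / lam
    """
    lam = math.log(2.0) / half_life
    return -math.log(1.0 - N * lam / P) / lam

# 10Be-like example (half-life ~1.387 Myr); the concentration and
# production rate are illustrative, not values from the paper.
age = exposure_age(N=5.0e4, P=5.0, half_life=1.387e6)
print(round(age))  # ~10^4 yr; decay is nearly negligible at this age
```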
NASA Astrophysics Data System (ADS)
Ruslan, Siti Zaharah Mohd; Jaffar, Maheran Mohd
2017-05-01
Islamic banking in Malaysia offers a variety of products based on Islamic principles. One of these concepts is diminishing musyarakah, which helps Muslims avoid transactions based on riba. Diminishing musyarakah can be defined as an agreement between a capital provider and an entrepreneur that enables the entrepreneur to buy equity in instalments, where profits and losses are shared based on an agreed ratio. The objective of this paper is to determine the internal rate of return (IRR) for a diminishing musyarakah model by applying a numerical method. There are several numerical methods for calculating the IRR, such as the interpolation method and trial and error using Microsoft Office Excel. In this paper we use the bisection method and the secant method as alternative ways of calculating the IRR. It was found that the diminishing musyarakah model can be adapted to managing the performance of joint venture investments. This paper will therefore encourage more companies to use the concept of joint venture in managing their investment performance.
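The bisection and secant approaches named above can be sketched directly: the IRR is the root of the net present value as a function of the periodic rate. The cash flows below are invented for illustration, not drawn from the paper's musyarakah model.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[0..n] at the given periodic rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr_bisection(cashflows, lo=-0.99, hi=10.0, tol=1e-10):
    """IRR via bisection; assumes NPV changes sign on [lo, hi].
    Slow but guaranteed to converge once the root is bracketed."""
    f_lo = npv(lo, cashflows)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) * f_lo > 0:
            lo, f_lo = mid, npv(mid, cashflows)
        else:
            hi = mid
    return 0.5 * (lo + hi)

def irr_secant(cashflows, r0=0.05, r1=0.10, tol=1e-10, max_iter=100):
    """IRR via the secant method: no derivative and usually faster than
    bisection, but convergence is not guaranteed."""
    for _ in range(max_iter):
        f0, f1 = npv(r0, cashflows), npv(r1, cashflows)
        if abs(f1 - f0) < 1e-300:
            break
        r0, r1 = r1, r1 - f1 * (r1 - r0) / (f1 - f0)
        if abs(r1 - r0) < tol:
            break
    return r1

# Toy instalment stream: pay 1000 now, receive 400/yr for 3 years.
flows = [-1000.0, 400.0, 400.0, 400.0]
print(round(irr_bisection(flows), 4), round(irr_secant(flows), 4))  # both ≈ 0.097
```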
Kimura, Koji; Sawa, Akihiro; Akagi, Shinji; Kihira, Kenji
2007-06-01
We have developed an original system to conduct surgical site infection (SSI) surveillance. This system accumulates SSI surveillance information based on the National Nosocomial Infections Surveillance (NNIS) System and the Japanese Nosocomial Infections Surveillance (JNIS) System. The features of this system are as follows: easy data input, high generality, data accuracy, prompt calculation of the SSI rate by operative procedure and risk index category (RIC) with comparison to the current NNIS SSI rate, and electronic export of the SSI rates and accumulated data. Using this system, we monitored 798 patients in 24 operative procedure categories in the Digestive Organs Surgery Department of Mazda Hospital, Mazda Motor Corporation, from January 2004 through December 2005. The total number of SSIs was 47, for a rate of 5.89%. The SSI rates of 777 patients were calculated based on 15 operative procedure categories and RIC. The highest SSI rate was observed in rectum surgery at RIC 1 (30%), followed by colon surgery at RIC 3 (28.57%). About 30% of the isolated infecting bacteria were Enterococcus faecalis, Staphylococcus aureus, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Escherichia coli. Using quantification theory type 2, the American Society of Anesthesiology score (4.531), volume of hemorrhage during operation (3.075), wound classification (1.76), operation time (1.352), and history of diabetes (0.989) ranked highest as factors for SSI. We therefore evaluate this system as a useful tool in safety control for operative procedures.
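The per-stratum rate calculation (SSI rate by operative procedure and RIC) reduces to counting infections over operations in each stratum. A minimal sketch with invented records:

```python
def ssi_rates(records):
    """SSI rate (%) per (procedure, risk index category) stratum;
    `records` is an iterable of (procedure, ric, infected) tuples."""
    counts = {}
    for proc, ric, infected in records:
        ops, ssi = counts.get((proc, ric), (0, 0))
        counts[(proc, ric)] = (ops + 1, ssi + (1 if infected else 0))
    return {key: 100.0 * ssi / ops for key, (ops, ssi) in counts.items()}

# Invented records, echoing the kinds of strata named in the abstract:
records = [("rectum", 1, True), ("rectum", 1, False), ("rectum", 1, False),
           ("colon", 3, True), ("colon", 3, True), ("colon", 3, False)]
for key, rate in sorted(ssi_rates(records).items()):
    print(key, round(rate, 1))  # colon/RIC 3: 66.7, rectum/RIC 1: 33.3
```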
Karadeniz, S T; Akgul, S U; Ogret, Y; Ciftci, H S; Bayraktar, A; Bakkaloglu, H; Caliskan, Y; Yelekci, K; Turkmen, A; Aydin, A E; Oguz, F S; Carin, M; Aydin, F
2017-04-01
High rates of panel-reactive antibody (PRA) may decrease the chance of kidney transplantation and may result in long waiting periods before transplantation. The calculated PRA (cPRA) is based on unacceptable HLA antigens. These antigens are identified by a program created from the antibodies that have developed against HLA antigens circulating in serum and the risk of these antibodies binding to antigens. The antigen profile of the population and antigen frequencies can be measured, and more realistic cPRA positivity rates may be obtained using this method. We developed a program based on the HLA antigens of 494 blood donors in 2 European Federation for Immunogenetics-accredited tissue typing laboratories in Turkey. Next-generation sequencing-based tissue typing (HLA-A, -B, -C, -DR, -DQ, 4 digits) of the samples was performed. The PRA screening test was performed on 380 patients awaiting organ transplantation from a cadaver at Istanbul Faculty of Medicine. Single antigen bead assay testing was performed to identify the antibody profiles of 48 hypersensitized patients. The PRA results using the current methods were 44.6% ± 18.5%, and the cPRA rate was 86.2% ± 5.1%. Thus, the mean PRA positivity of the sensitized patients was 44.6% using the current methods but 86.2% using the cPRA. The cPRA reflects the proportion of donors that would be rejected according to all unacceptable antigens. Replacing the PRA positivity rate with a list of unacceptable antigens represents a real change toward a sensitization-dependent calculation in the form of the cPRA positivity rate. In principle, implementation of the cPRA will encourage many centers and laboratories to adopt a standard measurement of sensitization in Turkey. It will increase the chances of a better donor match, particularly for hypersensitized patients, through the creation of an unacceptable-mismatch program using cPRA software.
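The core cPRA computation, the share of panel donors carrying at least one of a patient's unacceptable antigens, can be sketched as below. The toy panel is illustrative, not the 494-donor Turkish panel.

```python
def cpra(donor_typings, unacceptable):
    """Calculated PRA: percentage of donors in the reference panel that
    carry at least one of the patient's unacceptable HLA antigens."""
    unacceptable = set(unacceptable)
    incompatible = sum(
        1 for antigens in donor_typings
        if unacceptable & set(antigens)
    )
    return 100.0 * incompatible / len(donor_typings)

# Toy panel of 4 donors (antigen sets are illustrative, not population data):
panel = [{"A*02:01", "B*07:02"}, {"A*01:01", "B*08:01"},
         {"A*02:01", "B*44:02"}, {"A*03:01", "B*07:02"}]
print(cpra(panel, {"A*02:01"}))  # 50.0: half the panel carries A*02:01
```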
Microprocessor-Based Valved Controller
NASA Technical Reports Server (NTRS)
Norman, Arnold M., Jr.
1987-01-01
New controller simpler, more precise, and lighter than predecessors. Mass-flow controller compensates for changing supply pressure and temperature such as occurs when gas-supply tank becomes depleted. By periodically updating calculation of mass-flow rate, controller determines correct new position for valve and keeps mass-flow rate nearly constant.
Wang, Jinhu; Pengthaisong, Salila; Cairns, James R Ketudat; Liu, Yongjun
2013-02-01
Nucleophile mutants of retaining β-glycosidases can act as glycosynthases to efficiently catalyze the synthesis of oligosaccharides. Previous studies showed that the rice BGlu1 mutants E386G, E386S and E386A catalyze oligosaccharide synthesis at different rates: the E386G mutant gave the fastest transglucosylation rate, approximately 3- and 19-fold faster than those of E386S and E386A, respectively. To account for the differences in their activities, in this paper the X-ray crystal structures of the BGlu1 mutants E386S and E386A were solved and compared with that of the E386G mutant. However, they show quite similar active sites, which implies that their activities cannot be elucidated from the crystal structures alone. Therefore, combined quantum mechanical/molecular mechanical (QM/MM) calculations were performed. Our calculations reveal that the catalytic reaction follows a single-step mechanism, i.e., the extraction of the proton by the acid/base, E176, and the formation of the glycosidic bond are concerted. The energy barriers are calculated to be 19.9, 21.5 and 21.9 kcal/mol for the mutants E386G, E386S and E386A, respectively, which is consistent with the order of their experimental relative activities. But based on the calculated activation energies, a 1.1 kcal/mol energy difference would translate to nearly a 100-fold rate difference. Although the rate-limiting step in these mutants has not been established, considering the size of the product and the nature of the active site, it is likely that product release, rather than chemistry, is rate limiting in the synthesis of these oligosaccharides catalyzed by BGlu1 mutants.
Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan
2018-05-31
The isodesmic reaction method is applied to calculate the potential energy surface (PES) along the reaction coordinates and the rate constants of barrierless reactions for the unimolecular dissociation of alkanes into two alkyl radicals and the reverse recombination reactions. The reaction class is divided into 10 subclasses depending on the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at the UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, the PESs at the B3LYP level and the corrected PESs are compared with PESs at the CASPT2/aug-cc-pVTZ level for 13 representative reactions; the deviations of the B3LYP-level PESs are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated to meaningful accuracy at a low level of ab initio theory using this correction scheme. High-pressure-limit rate constants and pressure-dependent rate constants of these reactions are calculated from the corrected PESs, and the results show that the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of alkane decomposition reactions and their reverse reactions has been studied. The present work provides an effective method to generate accurate PESs for large molecular systems.
Minakata, Daisuke; Crittenden, John
2011-04-15
The hydroxyl radical (HO(•)) is a strong oxidant that reacts with electron-rich sites on organic compounds and initiates complex radical chain reactions in aqueous-phase advanced oxidation processes (AOPs). Computer-based kinetic modeling requires a reaction pathway generator and predictions of the associated reaction rate constants. Previously, we reported a reaction pathway generator that can enumerate the most important elementary reactions for aliphatic compounds. For the reaction rate constant predictor, we develop linear free energy relationships (LFERs) between aqueous-phase literature-reported HO(•) reaction rate constants and theoretically calculated free energies of activation for H-atom abstraction from a C-H bond and HO(•) addition to alkenes. The theoretical method uses ab initio quantum mechanical calculations (Gaussian 1-3) for gas-phase reactions and a solvation method, COSMO-RS theory, to estimate the impact of water. Theoretically calculated free energies of activation are found to be within approximately ±3 kcal/mol of experimental values; considering the errors that arise from quantum mechanical calculations and experiments, this is within acceptable error. The established LFERs predict the HO(•) reaction rate constants within a factor of 5 of the experimental values. This approach may be applied to other reaction mechanisms to establish a library of rate constant predictions for kinetic modeling of AOPs.
Characterization of a mine fire using atmospheric monitoring system sensor data.
Yuan, L; Thomas, R A; Zhou, L
2017-06-01
Atmospheric monitoring systems (AMS) have been widely used in underground coal mines in the United States for the detection of fire in the belt entry and the monitoring of other ventilation-related parameters such as airflow velocity and methane concentration in specific mine locations. In addition to an AMS being able to detect a mine fire, the AMS data have the potential to provide fire characteristic information such as fire growth - in terms of heat release rate - and exact fire location. Such information is critical in making decisions regarding fire-fighting strategies, underground personnel evacuation and optimal escape routes. In this study, a methodology was developed to calculate the fire heat release rate using AMS sensor data for carbon monoxide concentration, carbon dioxide concentration and airflow velocity based on the theory of heat and species transfer in ventilation airflow. Full-scale mine fire experiments were then conducted in the Pittsburgh Mining Research Division's Safety Research Coal Mine using an AMS with different fire sources. Sensor data collected from the experiments were used to calculate the heat release rates of the fires using this methodology. The calculated heat release rate was compared with the value determined from the mass loss rate of the combustible material using a digital load cell. The experimental results show that the heat release rate of a mine fire can be calculated using AMS sensor data with reasonable accuracy.
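A minimal sketch of how a heat release rate could be estimated from AMS-style CO2 and airflow readings, assuming CO2-generation calorimetry with a generic constant of roughly 13.3 MJ per kg of CO2 generated; the function, constants, and readings below are illustrative assumptions, not the study's exact methodology.

```python
def heat_release_rate(x_co2_down, x_co2_up, velocity, area,
                      rho_air=1.2, m_air=0.029, m_co2=0.044,
                      e_co2=13.3e6):
    """Fire heat release rate (W) from the rise in CO2 mole fraction
    across the fire, the airflow velocity (m/s) and the entry
    cross-section (m^2). e_co2 is the heat released per kg of CO2
    generated (~13.3 MJ/kg for many fuels)."""
    molar_airflow = velocity * area * rho_air / m_air          # mol/s of air
    co2_mass_rate = (x_co2_down - x_co2_up) * molar_airflow * m_co2  # kg/s
    return e_co2 * co2_mass_rate

# e.g. a 0.1 % CO2 rise in a 2 m/s flow through a 6 m^2 entry
print(heat_release_rate(0.0014, 0.0004, 2.0, 6.0) / 1e3, "kW")  # ~290 kW
```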
Characteristics and verification of a car-borne survey system for dose rates in air: KURAMA-II.
Tsuda, S; Yoshida, T; Tsutsumi, M; Saito, K
2015-01-01
The car-borne survey system KURAMA-II, developed by the Kyoto University Research Reactor Institute, has been used for air dose rate mapping after the Fukushima Dai-ichi Nuclear Power Plant accident. KURAMA-II consists of a CsI(Tl) scintillation detector, a GPS device, and a control device for data processing. The dose rates monitored by KURAMA-II are based on the G(E) function (spectrum-dose conversion operator), which can precisely calculate dose rates from measured pulse-height distribution even if the energy spectrum changes significantly. The characteristics of KURAMA-II have been investigated with particular consideration to the reliability of the calculated G(E) function, dose rate dependence, statistical fluctuation, angular dependence, and energy dependence. The results indicate that 100 units of KURAMA-II systems have acceptable quality for mass monitoring of dose rates in the environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
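The G(E) method reduces to a channel-by-channel weighted sum of the measured pulse-height spectrum. A toy sketch follows; the G(E) weights are hypothetical placeholders, not KURAMA-II's calibrated operator.

```python
def dose_rate_from_spectrum(counts, g_of_e):
    """Air dose rate from a measured pulse-height distribution using a
    spectrum-dose conversion operator G(E): the dose rate is the sum
    over channels of counts(E) * G(E)."""
    return sum(n * g for n, g in zip(counts, g_of_e))

# toy 4-channel example with hypothetical G(E) weights
counts = [120, 80, 30, 5]           # counts per second per channel
g = [0.001, 0.004, 0.012, 0.03]     # hypothetical uSv/h per count
print(dose_rate_from_spectrum(counts, g))  # 0.95
```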
A Constant Rate of Spontaneous Mutation in DNA-Based Microbes
NASA Astrophysics Data System (ADS)
Drake, John W.
1991-08-01
In terms of evolution and fitness, the most significant spontaneous mutation rate is likely to be that for the entire genome (or its nonfrivolous fraction). Information is now available to calculate this rate for several DNA-based haploid microbes, including bacteriophages with single- or double-stranded DNA, a bacterium, a yeast, and a filamentous fungus. Their genome sizes vary by ≈6500-fold. Their average mutation rates per base pair vary by ≈16,000-fold, whereas their mutation rates per genome vary by only ≈2.5-fold, apparently randomly, around a mean value of 0.0033 per DNA replication. The average mutation rate per base pair is inversely proportional to genome size. Therefore, a nearly invariant microbial mutation rate appears to have evolved. Because this rate is uniform in such diverse organisms, it is likely to be determined by deep general forces, perhaps by a balance between the usually deleterious effects of mutation and the physiological costs of further reducing mutation rates.
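The per-genome rate is simply the per-base-pair rate scaled by genome size. A sketch with hypothetical round numbers chosen to be consistent with the ~0.0033 per-replication mean reported above:

```python
def genome_mutation_rate(rate_per_bp, genome_size_bp):
    """Spontaneous mutation rate per genome per replication: the
    per-base-pair rate times the genome size in base pairs."""
    return rate_per_bp * genome_size_bp

# hypothetical microbe: 4e6 bp genome, per-bp rate of 8.25e-10
print(genome_mutation_rate(8.25e-10, 4e6))  # 0.0033 per replication
```

The inverse proportionality noted in the abstract follows directly: if the per-genome rate is held near 0.0033, the per-bp rate must scale as 0.0033 divided by genome size.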
Algorithms to qualify respiratory data collected during the transport of trauma patients.
Chen, Liangyou; McKenna, Thomas; Reisner, Andrew; Reifman, Jaques
2006-09-01
We developed a quality indexing system to numerically qualify respiratory data collected by vital-sign monitors in order to support reliable post-hoc mining of respiratory data. Each monitor-provided (reference) respiratory rate (RR(R)) is evaluated, second-by-second, to quantify the reliability of the rate with a quality index (QI(R)). The quality index is calculated from: (1) a breath identification algorithm that identifies breaths of 'typical' sizes and recalculates the respiratory rate (RR(C)); (2) an evaluation of the respiratory waveform quality (QI(W)) by assessing waveform ambiguities as they impact the calculation of respiratory rates and (3) decision rules that assign a QI(R) based on RR(R), RR(C) and QI(W). RR(C), QI(W) and QI(R) were compared to rates and quality indices independently determined by human experts, with the human measures used as the 'gold standard', for 163 randomly chosen 15 s respiratory waveform samples from our database. The RR(C) more closely matches the rates determined by human evaluation of the waveforms than does the RR(R) (difference of 3.2 +/- 4.6 breaths min(-1) versus 14.3 +/- 19.3 breaths min(-1), mean +/- STD, p < 0.05). Higher QI(W) is found to be associated with smaller differences between calculated and human-evaluated rates (average differences of 1.7 and 8.1 breaths min(-1) for the best and worst QI(W), respectively). Establishment of QI(W) and QI(R), which ranges from 0 for the worst-quality data to 3 for the best, provides a succinct quantitative measure that allows for automatic and systematic selection of respiratory waveforms and rates based on their data quality.
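A simplified, hypothetical version of such decision rules (the thresholds below are illustrative, not those of the paper): trust the reference rate only when both the waveform quality QI(W) and the agreement between RR(R) and RR(C) support it.

```python
def quality_index(rr_ref, rr_calc, qi_wave):
    """Sketch of a QI(R) decision rule (0 = worst data, 3 = best).
    rr_ref/rr_calc are the reference and recalculated respiratory
    rates (breaths/min); qi_wave is the waveform quality QI(W).
    All thresholds are illustrative assumptions."""
    agreement = abs(rr_ref - rr_calc)
    if qi_wave >= 3 and agreement <= 2:
        return 3
    if qi_wave >= 2 and agreement <= 5:
        return 2
    if qi_wave >= 1 and agreement <= 10:
        return 1
    return 0

print(quality_index(18, 17, 3))  # clean waveform, rates agree -> 3
print(quality_index(30, 12, 1))  # ambiguous waveform, large gap -> 0
```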
National fire-danger rating system fine-fuel moisture content tablesan Alaskan adaptation.
Richard J. Barney
1969-01-01
Fine-fuel moisture content tables, using dry bulb and dewpoint temperatures as entry data, have been developed for use with the National Fire-Danger Rating System in Alaska. Comparisons have been made which illustrate differences resulting from danger-rating calculations based on these new fine-fuel moisture content tables for the cured, transition, and green...
One-Dimensional Hybrid Satellite Track Model for the Dynamics Explorer 2 (DE 2) Satellite
NASA Technical Reports Server (NTRS)
Deng, Wei; Killeen, T. L.; Burns, A. G.; Johnson, R. M.; Emery, B. A.; Roble, R. G.; Winningham, J. D.; Gary, J. B.
1995-01-01
A one-dimensional hybrid satellite track model has been developed to calculate the high-latitude thermospheric/ionospheric structure below the satellite altitude using Dynamics Explorer 2 (DE 2) satellite measurements and theory. This model is based on the Emery et al. satellite track code but also includes elements of the Roble et al. global mean thermosphere/ionosphere model. A number of parameterizations and data handling techniques are used to input satellite data from several DE 2 instruments into this model. Profiles of neutral atmospheric densities are determined from the MSIS-90 model and measured neutral temperatures. Measured electron precipitation spectra are used in an auroral model to calculate particle impact ionization rates below the satellite. These rates are combined with a solar ionization rate profile and used to solve the O(+) diffusion equation, with the measured electron density as an upper boundary condition. The calculated O(+) density distribution, as well as the ionization profiles, are then used in a photochemical equilibrium model to calculate the electron and molecular ion densities. The electron temperature is also calculated by solving the electron energy equation with an upper boundary condition determined by the DE 2 measurement. The model enables calculations of altitude profiles of conductivity and Joule heating rate along and below the satellite track. In a first application of the new model, a study is made of thermospheric and ionospheric structure below the DE 2 satellite for a single orbit which occurred on October 25, 1981. The field-aligned Poynting flux, which is independently obtained for this orbit, is compared with the model predictions of the height-integrated energy conversion rate. Good quantitative agreement between these two estimates has been reached.
In addition, measurements taken at the incoherent scatter radar site at Chatanika (65.1 deg N, 147.4 deg W) during a DE 2 overflight are compared with the model calculations. A good agreement was found in lower thermospheric conductivities and Joule heating rate.
Direct measurement of neon production rates by (α,n) reactions in minerals
NASA Astrophysics Data System (ADS)
Cox, Stephen E.; Farley, Kenneth A.; Cherniak, Daniele J.
2015-01-01
The production of nucleogenic neon from alpha particle capture by 18O and 19F offers a potential chronometer sensitive to temperatures higher than the more widely used (U-Th)/He chronometer. The accuracy depends on the cross sections and the calculated stopping power for alpha particles in the mineral being studied. Published 18O(α,n)21Ne production rates are in poor agreement and were calculated from contradictory cross sections, and therefore demand experimental verification. Similarly, the stopping powers for alpha particles are calculated from SRIM (Stopping and Range of Ions in Matter software) based on a limited experimental dataset. To address these issues we used a particle accelerator to implant alpha particles at precisely known energies into slabs of synthetic quartz (SiO2) and barium tungstate (BaWO4) to measure 21Ne production from capture by 18O. Within experimental uncertainties the observed 21Ne production rates compare favorably to our predictions using published cross sections and stopping powers, indicating that ages calculated using these quantities are accurate at the ∼3% level. In addition, we measured the 22Ne/21Ne ratio and (U-Th)/He and (U-Th)/Ne ages of Durango fluorapatite, which is an important model system for this work because it contains both oxygen and fluorine. Finally, we present 21Ne/4He production rate ratios for a variety of minerals of geochemical interest along with software for calculating neon production rates and (U-Th)/Ne ages.
Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F
2014-01-01
Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
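A minimal sketch of the contrast this study draws: estimating probabilities from quantized sequences of RR intervals rather than from single-interval histogram bins, then computing the Renyi entropy. The bin width, sequence length, and RR series below are illustrative assumptions, not the paper's exact parameters.

```python
import math
from collections import Counter

def renyi_entropy(probs, alpha):
    """Renyi entropy of order alpha: H_a = log(sum p^alpha) / (1 - alpha).
    Falls back to Shannon entropy at alpha = 1."""
    if alpha == 1:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs)) / (1 - alpha)

def sequence_probs(rr_ms, length=2, bin_ms=50):
    """Probabilities estimated from overlapping sequences of quantized
    RR intervals (a sequence/density-style estimate over pairs),
    rather than from single-interval histogram bins."""
    bins = [round(x / bin_ms) for x in rr_ms]
    seqs = [tuple(bins[i:i + length]) for i in range(len(bins) - length + 1)]
    counts = Counter(seqs)
    n = sum(counts.values())
    return [c / n for c in counts.values()]

rr = [800, 810, 790, 805, 950, 800, 795, 815]  # toy RR series (ms)
print(renyi_entropy(sequence_probs(rr), alpha=2))
```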
Army Manpower Cost System (AMCOS): Active Enlisted Force Prototype
1986-03-01
cost element in both economic and budget models includes both a soldier's Base Pay and the Service's FICA contribution at the current tax rate. ... mean base pay for the position calculated from BP ... FCAP - current maximum FICA payable; FRATE - current FICA tax rate; Tlij - total base pay distributed ... Group, Santa Monica, 1982. Butler, R. and T. Neches, "HARDMAN Program Manager's LCC Handbook: Avionics Equipments," D-201, The Assessment Group
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from special sensor microwave/imager (SSM/I) multichannel imagery. The algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Cancer incidence among Arab Americans in California, Detroit, and New Jersey SEER registries.
Bergmans, Rachel; Soliman, Amr S; Ruterbusch, Julie; Meza, Rafael; Hirko, Kelly; Graff, John; Schwartz, Kendra
2014-06-01
We calculated cancer incidence for Arab Americans in California; Detroit, Michigan; and New Jersey, and compared rates with non-Hispanic, non-Arab Whites (NHNAWs); Blacks; and Hispanics. We conducted a study using population-based data. We linked new cancers diagnosed in 2000 from the Surveillance, Epidemiology, and End Results Program (SEER) to an Arab surname database. We used standard SEER definitions and methodology for calculating rates. Population estimates were extracted from the 2000 US Census. We calculated incidence and rate ratios. Arab American men and women had similar incidence rates across the 3 geographic regions, and the rates were comparable to NHNAWs. However, the thyroid cancer rate was elevated among Arab American women compared with NHNAWs, Hispanics, and Blacks. For all sites combined, for prostate and lung cancer, Arab American men had a lower incidence than Blacks and higher incidence than Hispanics in all 3 geographic regions. Arab American male bladder cancer incidence was higher than that in Hispanics and Blacks in these regions. Our results suggested that further research would benefit from the federal recognition of Arab Americans as a specified ethnicity to estimate and address the cancer burden in this growing segment of the population.
42 CFR 425.502 - Calculating the ACO quality performance score.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Patient/care giver experience. (ii) Care coordination/Patient safety. (iii) Preventative health. (iv) At... year. (1) For the first performance year of an ACO's agreement, CMS defines the quality performance... defined by CMS based on national Medicare fee-for-service rates, national MA quality measure rates, or a...
42 CFR 425.502 - Calculating the ACO quality performance score.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Patient/care giver experience. (ii) Care coordination/Patient safety. (iii) Preventative health. (iv) At... year. (1) For the first performance year of an ACO's agreement, CMS defines the quality performance... defined by CMS based on national Medicare fee-for-service rates, national MA quality measure rates, or a...
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to a different number of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines accurate measurement with rough estimation. A feedback suppression scheme based on a random timer mechanism is used to avoid feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness, and intra-protocol fairness, and provides high link utilization.
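A hedged sketch of the window-to-rate step described above: a receiver's expected rate from its congestion window (rate ≈ W·S/RTT) and the number of fixed-rate layers whose cumulative rate fits under it. The formulas and values are illustrative, not FLMCC's exact specification.

```python
def expected_rate(cwnd_packets, packet_size_bytes, rtt_s):
    """Receiver's expected rate (bytes/s) from its congestion window,
    as in window-based TCP-friendly schemes: rate = W * S / RTT."""
    return cwnd_packets * packet_size_bytes / rtt_s

def layers_to_join(expected_bytes_per_s, layer_bytes_per_s):
    """Number of fixed-rate layers whose cumulative rate fits under
    the expected rate (always at least the base layer)."""
    return max(1, int(expected_bytes_per_s // layer_bytes_per_s))

# e.g. a 20-packet window of 1000-byte packets over a 100 ms RTT,
# with 64 kB/s layers
rate = expected_rate(20, 1000, 0.1)
print(rate, layers_to_join(rate, 64000))
```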
Toward Hypertension Prediction Based on PPG-Derived HRV Signals: a Feasibility Study.
Lan, Kun-Chan; Raknim, Paweeya; Kao, Wei-Fong; Huang, Jyh-How
2018-04-21
Heart rate variability (HRV) is often used to assess the risk of cardiovascular disease, and data on this can be obtained via electrocardiography (ECG). However, collecting heart rate data via photoplethysmography (PPG) is now a lot easier. We investigate the feasibility of using the PPG-based heart rate to estimate HRV and predict diseases. We obtain three months of PPG-based heart rate data from subjects with and without hypertension, and calculate the HRV based on various forms of time and frequency domain analysis. We then apply a data mining technique to this estimated HRV data, to see if it is possible to correctly identify patients with hypertension. We use six HRV parameters to predict hypertension, and find SDNN has the best predictive power. We show that early disease prediction is possible through collecting one's PPG-based heart rate information.
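Standard time-domain HRV parameters, including the SDNN highlighted above, can be computed directly from an RR-interval series. A minimal sketch with a toy series (the RR values are illustrative):

```python
import statistics

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal RR intervals (ms),
    the time-domain HRV parameter the study found most predictive."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

rr = [812, 790, 835, 805, 820, 795]  # toy PPG-derived RR series (ms)
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```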
Fereshtehnejad, Seyed-Mohammad; Shafieesabet, Mahdiyeh; Rahmani, Arash; Delbari, Ahmad; Lökk, Johan
2015-01-01
Parkinsonism occurs in all ethnic groups worldwide; however, there are wide variations in the prevalence rates reported from different countries, even for neighboring regions. The huge socioeconomic burden of parkinsonism necessitates prevalence studies in each country. So far, there has been neither a data registry nor prevalence information on parkinsonism in the Iranian population. The aim of our study was to estimate the prevalence rate of probable parkinsonism in Tehran, a huge urban area of Iran, using a community-based door-to-door survey. We used random multistage sampling of households within the network of health centers, consisting of 374 subunits in all 22 districts throughout the entire urban area of Tehran. Overall, 20,621 individuals answered the baseline checklist and screening questionnaire, and data from 19,500 persons aged ≥30 years were entered in the final analysis. Health care professionals used a new six-item screening questionnaire for parkinsonism, which has previously been shown to have high validity and diagnostic value in the same population. A total of 157 cases were screened for parkinsonism using the validated six-item questionnaire. After age and sex adjustment based on the Tehran population, the prevalence of parkinsonism was calculated as 222.9 per 100,000. Using the World Health Organization's World Standard Population, the standardized prevalence rate of parkinsonism was 285 per 100,000 (95% confidence interval 240-329). The male:female ratio of probable parkinsonism was calculated as 1.62, and the screening rate increased significantly with advancing age. The calculated prevalence rates in our study are close to reports from some European and Middle Eastern countries, higher than reports from Eastern Asian and African populations, and lower than those from Australia. The prevalence rate of >200 per 100,000 for parkinsonism in Tehran, Iran could be considered a medium-to-high rate.
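Direct standardization, as used here with the WHO World Standard Population, weights each age-stratum rate by the standard population's share of that stratum and sums. A sketch with hypothetical strata, rates, and weights (not the study's actual data):

```python
def standardized_rate(stratum_rates, standard_weights):
    """Directly standardized prevalence (per 100,000): each
    age-stratum rate weighted by the standard population's share
    of that stratum, then summed. Weights must sum to 1."""
    assert abs(sum(standard_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(stratum_rates, standard_weights))

# hypothetical age-specific rates (per 100,000) and standard weights
rates = [40, 150, 600, 2200]        # e.g. 30-44, 45-59, 60-74, 75+
weights = [0.55, 0.25, 0.15, 0.05]  # hypothetical standard-population shares
print(standardized_rate(rates, weights))  # 259.5 per 100,000
```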
2018-01-01
Our first aim was to compare the anaerobic threshold (AnT) determined by an incremental protocol with the reverse lactate threshold test (RLT), investigating the effect of previous cycling experience. Secondarily, an alternative RLT application based on heart rate was proposed. Two groups (12 per group, according to cycling experience) were evaluated on a cycle ergometer. The incremental protocol started at 25 W with increments of 25 W every 3 minutes, and the AnT was calculated by the bi-segmentation, onset of blood lactate accumulation, and maximal deviation methods. The RLT was applied in two phases: a) a lactate priming segment; and b) a reverse segment; the AnT (AnTRLT) was calculated based on a second-order polynomial function. The AnT from the RLT based on heart rate (AnTRLT-HR) was likewise calculated by a second-order polynomial function. With regard to Study 1, most of the statistical procedures converged on similarity between the AnT determined by the bi-segmentation method and AnTRLT. For 83% of non-experienced and 75% of experienced subjects the bias was 4% and 2%, respectively. In Study 2, no difference was found between the AnTRLT and AnTRLT-HR. For 83% of non-experienced and 91% of experienced subjects, the bias between AnTRLT and AnTRLT-HR was similar (i.e. 6%). In summary, the AnT determined by the incremental protocol and the RLT are consistent. The AnT can be determined during the RLT via heart rate, improving its applicability. However, future studies are required to improve the agreement between variables. PMID:29534108
Calculation and manipulation of the chirp rates of high-order harmonics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murakami, M.; Mauritsson, J.; Schafer, K.J.
2005-01-01
We calculate the linear chirp rates of high-order harmonics in argon, generated by intense, 810 nm laser pulses, and explore the dependence of the chirp rate on harmonic order, driving laser intensity, and pulse duration. By using a time-frequency representation of the harmonic fields we can identify several different linear chirp contributions to the plateau harmonics. Our results, which are based on numerical integration of the time-dependent Schroedinger equation, are in good agreement with the adiabatic predictions of the strong field approximation for the chirp rates. Extending the theoretical analysis in the recent paper by Mauritsson et al. [Phys. Rev. A 70, 021801(R) (2004)], we also manipulate the chirp rates of the harmonics by adding a chirp to the driving pulse. We show that the chirp rate for harmonic q is given by the sum of the intrinsic chirp rate, which is determined by the new duration and peak intensity of the chirped driving pulse, and q times the external chirp rate.
Topin, Jérémie; Diharce, Julien; Fiorucci, Sébastien; Antonczak, Serge; Golebiowski, Jérôme
2014-01-23
Hydrogenases are promising candidates for the catalytic production of green energy by biological means. The major impediment to such production is their inhibition under aerobic conditions. In this work, we model dioxygen migration rates in mutants of a hydrogenase of Desulfovibrio fructusovorans. The approach relies on calculation of the whole potential of mean force for O2 migration within the wild-type as well as the V74M, V74F, and V74Q mutant channels. The three free-energy barriers along the entire migration pathway are converted into chemical rates through modeling based on transition state theory. This model recovers the trend of O2 migration rates across the series.
Future Wave Height Situation estimated by the Latest Climate Scenario around Funafuti Atoll, Tuvalu
NASA Astrophysics Data System (ADS)
Sato, D.; Yokoki, H.; Kuwahara, Y.; Yamano, H.; Kayanne, H.; Okajima, H.; Kawamiya, M.
2012-12-01
Sea-level rise due to global warming is a significant phenomenon for coastal regions worldwide. Atoll islands in particular, being low-lying and narrow, are highly vulnerable to sea-level rise. Recently, an improved future climate projection (MIROC-ESM) was provided by JAMSTEC, which adopts the latest climate scenarios based on the Representative Concentration Pathways (RCPs) of greenhouse gases. A wave-field simulation incorporating the latest sea-level rise pathway from MIROC-ESM was conducted to understand changes in significant wave heights in Funafuti Atoll, Tuvalu, an important factor in managing coast protection. MIROC-ESM provides monthly sea surface height on a fine global grid (1.5 degrees near the equator). The wave-field simulation used the RCP4.5 scenario, in which the radiative forcing at the end of the 21st century is stabilized at 4.5 W/m2. The sea-level rise ratio for every 10 years was calculated from the historical data set (1850-2005) and the projected data set (2006-2100); in this case, the sea level increases by 10 cm after 100 years. The numerical simulation of the wave field under this rate of sea-level rise was carried out using the SWAN model. The wave and wind conditions around Funafuti Atoll are characterized by two seasons, the trade-wind season (Apr.-Nov.) and the non-trade-wind season (Jan.-Mar., Dec.); accordingly, two seasonal boundary conditions, calculated from ECMWF reanalysis data, were set up for a one-year simulation. The simulated significant wave heights were analyzed as increase rates (%) of the 2100 results relative to the base results (the 2000-2005 average). The calculated increase rate of significant wave height for both seasons was extremely high on the reef flat, with maximum increase rates of 1817% in the trade-wind season and 686% in the non-trade-wind season.
The southern part of the atoll shows a high increase rate through both seasons. In the non-trade-wind season, the northern tip and the southern part of the island show higher increase rates on the lagoon-side coasts, about 7%, with an average rate of 3.4%; the average rate in the trade-wind season is 5.0%. The ocean-side coast shows a high increase rate through both seasons, and a very large rate was calculated locally in the northern part of Fongafale Island. DEM data for the middle of Fongafale Island, the most populated area of the island, show that the northern oceanic coast has a wide, high storm ridge, and the increase rate there is extremely large. On such coasts, sea-level rise due to global warming has the same effect as storm surge from tropical cyclones in raising the sea level, although their time scales differ. Thus the areas with large calculated increase rates have likely already experienced the high waves from tropical cyclones that built the wide, high storm ridge. This result indicates that effective coastal management under sea-level rise requires understanding not only the quantitative estimate of the future situation but also the protective capacity shaped by present wave and wind conditions.
Delivered volumes of enteral nutrition exceed prescribed volumes.
Walker, Renee Nichole; Utech, Anne; Velez, Maria Eugenia; Schwartz, Katie
2014-10-01
Enteral nutrition (EN) provisions are typically calculated based on a 24-hour infusion period. However, feedings are often interrupted for daily activities, procedures, or gastrointestinal intolerance. The study's objective was to determine the EN quantities delivered to stable hospitalized patients, using cellular time and measured volumes to verify our EN calculation adjustment. A supply of consecutively numbered ready-to-hang (RTH) EN product was delivered to the bedside of 26 inpatients with established EN tolerance at goal rates on various types of nursing units. The dietitian weighed the volume remaining in the infusing product and recorded the measurement time. On the following days, the dietitian continued to weigh the infusing RTH product and the empty RTH bottles saved by nursing. The primary outcome was the difference between the prescribed and delivered EN provisions, which was evaluated with a paired t test. Patients received significantly more calories from the delivered enteral feeding (mean [SD], 1678 [385] kcal) than prescribed in the EN order (1489 [246] kcal; t = 3.736, P = .001), adjusting for observed time. No significant differences were found among nursing units, products, or rates. EN delivered may actually exceed ordered amounts by 5%-21% (mean, 12%), with feeding pump inaccuracy as the primary contributing factor. This differs from what others have found. Our findings support using a volume-based ordering system vs a rate-based ordering system for more accurate EN delivery.
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A model for predicting the distribution of liquid fuel droplets and fuel vapor in premixing-prevaporizing fuel-air mixing passages of the direct-injection type is reported. The model consists of three computer programs: a calculation of the two-dimensional or axisymmetric air flow field neglecting the effects of fuel; a calculation of the three-dimensional fuel droplet trajectories and evaporation rates in a known, moving air flow; and a calculation of fuel vapor diffusing into a moving three-dimensional air flow with source terms dependent on the droplet evaporation rates. The fuel droplets are treated as individual particle classes, each satisfying Newton's law, a heat transfer equation, and a mass transfer equation. The fuel droplet model treats multicomponent fuels and incorporates the physics required for the treatment of elastic droplet collisions, droplet shattering, droplet coalescence, and droplet-wall interactions. The vapor diffusion calculation treats three-dimensional, gas-phase, turbulent diffusion processes. The analysis includes a model for the autoignition of the fuel-air mixture based upon the rate of formation of an important intermediate chemical species during the preignition period.
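The per-droplet equations described above (Newton's law for the trajectory plus heat- and mass-transfer relations) can be sketched for a single droplet class using linear Stokes drag and a d²-law evaporation model; the relaxation time and evaporation constant below are illustrative values, not parameters from the report:

```python
def step_droplet(pos, vel, d2, u_air, dt, tau=5e-3, k_evap=1e-7):
    """One explicit Euler step for a single droplet class.
    tau: velocity relaxation time for Stokes drag (s),
    k_evap: d^2-law evaporation constant (m^2/s) as a mass-transfer proxy.
    Both parameter values are illustrative only."""
    acc = (u_air - vel) / tau           # Newton's law with linear drag
    vel = vel + acc * dt
    pos = pos + vel * dt
    d2 = max(d2 - k_evap * dt, 0.0)     # squared diameter shrinks linearly in time
    return pos, vel, d2
```

A full simulation would march such steps along the passage for each droplet class while feeding the evaporation rates into the vapor diffusion calculation as source terms.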
Spectroscopy of reflection-asymmetric nuclei with relativistic energy density functionals
NASA Astrophysics Data System (ADS)
Xia, S. Y.; Tao, H.; Lu, Y.; Li, Z. P.; Nikšić, T.; Vretenar, D.
2017-11-01
Quadrupole and octupole deformation energy surfaces, low-energy excitation spectra, and transition rates in 14 isotopic chains: Xe, Ba, Ce, Nd, Sm, Gd, Rn, Ra, Th, U, Pu, Cm, Cf, and Fm, are systematically analyzed using a theoretical framework based on a quadrupole-octupole collective Hamiltonian (QOCH), with parameters determined by constrained reflection-asymmetric and axially symmetric relativistic mean-field calculations. The microscopic QOCH model based on the PC-PK1 energy density functional and δ-interaction pairing is shown to accurately describe the empirical trend of low-energy quadrupole and octupole collective states, and predicted spectroscopic properties are consistent with recent microscopic calculations based on both relativistic and nonrelativistic energy density functionals. Low-energy negative-parity bands, average octupole deformations, and transition rates show evidence for octupole collectivity in both mass regions, for which a microscopic mechanism is discussed in terms of evolution of single-nucleon orbitals with deformation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yong; Zhang, Dong H., E-mail: zhangdh@dicp.ac.cn
2014-11-21
Eight-dimensional (8D) transition-state wave packet simulations have been performed on the two latest potential energy surfaces (PESs), the Zhou-Fu-Wang-Collins-Zhang (ZFWCZ) PES [Y. Zhou, B. Fu, C. Wang, M. A. Collins, and D. H. Zhang, J. Chem. Phys. 134, 064323 (2011)] and the Xu-Chen-Zhang (XCZ)-neural networks (NN) PES [X. Xu, J. Chen, and D. H. Zhang, Chin. J. Chem. Phys. 27, 373 (2014)]. Reaction rate constants for both the H+CH4 reaction and the H2+CH3 reaction are calculated. Simulations of the H+CH4 reaction based on the XCZ-NN PES show that the ZFWCZ PES predicts rate constants with reasonably high accuracy for low temperatures while leading to slightly lower results for high temperatures, in line with the distribution of interpolation error associated with the ZFWCZ PES. The 8D H+CH4 rate constants derived on the ZFWCZ PES compare well with full-dimensional 12D results based on the equivalent m-ZFWCZ PES, with a maximum relative difference of no more than 20%. Additionally, very good agreement is shown by comparing the 8D XCZ-NN rate constants with the 12D results obtained on the ZFWCZ-WM PES, after accounting for the difference in static barrier height between these two PESs. The reaction rate constants calculated for the H2+CH3 reaction are found to be in good agreement with experimental observations.
Mutual Information Rate and Bounds for It
Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso
2012-01-01
The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809
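As a rough illustration of the quantity being bounded, mutual information per unit time can be naively estimated from two binned time series with a plug-in histogram estimator; this is exactly the probability-based approach the paper's bounds are designed to avoid, and is shown here only to fix ideas (function names and the bin count are illustrative):

```python
import numpy as np

def mutual_information_bits(x, y, bins=8):
    """Plug-in mutual information estimate (bits) from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

def mir_naive(x, y, dt):
    """Crude mutual-information-per-unit-time proxy (bits/s): information
    shared per sample divided by the sampling interval. NOT the rigorous
    MIR of dynamical systems theory."""
    return mutual_information_bits(x, y) / dt
```

The paper's contribution is precisely that its upper and lower bounds on the MIR require no such probability estimation, using instead well-defined dynamical quantities.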
ERIC Educational Resources Information Center
Oregon Department of Education, 2017
2017-01-01
High School graduation rates are key indicators of accountability for high schools and school districts in Oregon. Beginning with the 2008-09 school year, the Oregon Department of Education (ODE) implemented the cohort method of calculating graduation rates. The cohort method identifies the year the student entered high school for the first time…
Long-distance entanglement-based quantum key distribution experiment using practical detectors.
Takesue, Hiroki; Harada, Ken-Ichi; Tamaki, Kiyoshi; Fukuda, Hiroshi; Tsuchizawa, Tai; Watanabe, Toshifumi; Yamada, Koji; Itabashi, Sei-Ichi
2010-08-02
We report an entanglement-based quantum key distribution experiment that we performed over 100 km of optical fiber using a practical source and detectors. We used a silicon-based photon-pair source that generated high-purity time-bin entangled photons, and high-speed single photon detectors based on InGaAs/InP avalanche photodiodes with the sinusoidal gating technique. To calculate the secure key rate, we employed a security proof that validated the use of practical detectors. As a result, we confirmed the successful generation of sifted keys over 100 km of optical fiber with a key rate of 4.8 bit/s and an error rate of 9.1%, with which we can distill secure keys with a key rate of 0.15 bit/s.
McCarthy, M R; Vandegriff, K D; Winslow, R M
2001-08-30
We compared rates of oxygen transport in an in vitro capillary system using red blood cells (RBCs) and cell-free hemoglobins. The axial PO2 drop down the capillary was calculated using finite-element analysis. RBCs, unmodified hemoglobin (HbA0), cross-linked hemoglobin (αα-Hb), and hemoglobin conjugated to polyethylene glycol (PEG-Hb) were evaluated. According to their fractional saturation curves, PEG-Hb showed the least desaturation down the capillary, most closely matching the RBCs; HbA0 and αα-Hb showed much greater desaturation. A lumped diffusion parameter, K*, was calculated based on the Fick diffusion equation with a term for facilitated diffusion. The overall rates of oxygen transfer are consistent with hemoglobin diffusion rates according to the Stokes-Einstein law and with previously measured blood pressure responses in rats. This study provides a conceptual framework for the design of a 'blood substitute' based on mimicking O2 transport by RBCs to prevent autoregulatory changes in blood flow and pressure.
TrackEtching - A Java based code for etched track profile calculations in SSNTDs
NASA Astrophysics Data System (ADS)
Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.
2017-09-01
A Java code incorporating a user-friendly GUI has been developed to calculate the parameters of chemically etched track profiles of ion-irradiated solid state nuclear track detectors. Huygens' construction of wavefronts based on secondary wavelets has been used to numerically calculate the etched track profile as a function of the etching time. Provision for normal and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of track etch rate as a function of the ion energy have been utilized. The best set of parameter values in the expressions can be obtained by comparison with available experimental data. The critical angle for track development can also be calculated using the present code.
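For the simplest case of constant etch rates (track etch rate Vt along the ion path and bulk etch rate Vb in undamaged material), the cone geometry behind the full Huygens construction reduces to closed forms; a sketch under that textbook approximation, not a transcription of the code described above:

```python
import math

def cone_half_angle(vt, vb):
    """Half-angle of the etched track cone for constant etch rates:
    sin(theta) = Vb / Vt."""
    return math.asin(vb / vt)

def track_length(vt, vb, t):
    """Etched track length along the ion path after etching time t,
    for constant Vt and Vb (track grows at Vt while the surface
    recedes at Vb)."""
    return (vt - vb) * t
```

The actual program handles energy-dependent Vt(E) and oblique incidence, which require the numerical wavefront construction rather than these closed forms.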
Video-Based Fingerprint Verification
Qin, Wei; Yin, Yilong; Liu, Lili
2013-01-01
Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
Jason R. Price; Michael A. Velbel; Lina C. Patino
2005-01-01
Rates of clay formation in three watersheds located at the Coweeta Hydrologic Laboratory, western North Carolina, have been determined from solute flux-based mass balance methods. A system of mass balance equations with enough equations and unknowns to allow calculation of secondary mineral formation rates as well as the more commonly determined primary-...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... is above de minimis, we will calculate importer-specific ad valorem duty assessment rate based on the...- average dumping margin is zero or de minimis, or an importer-specific assessment rate is zero or de... final results of this review (except, if the rate is zero or de minimis, then zero cash deposit will be...
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false How is the pay rate for COP calculated? 10..., AS AMENDED Continuation of Pay Calculation of Cop § 10.216 How is the pay rate for COP calculated? The employer shall calculate COP using the period of time and the weekly pay rate. (a) The pay rate...
Pricing of premiums for equity-linked life insurance based on joint mortality models
NASA Astrophysics Data System (ADS)
Riaman; Parmikanti, K.; Irianingsih, I.; Supian, S.
2018-03-01
Equity-linked life insurance is a financial product that offers not only protection but also investment. The calculation of equity-linked life insurance premiums generally uses mortality tables. Because of advances in medical technology and reduced birth rates, the use of mortality tables appears less relevant in the calculation of premiums. To overcome this problem, we use a combined mortality model, which in this study is determined based on the 2011 Indonesian Mortality Table, to determine the probabilities of death and survival. In this research, we use a combined mortality model built from the Weibull, Inverse-Weibull, and Gompertz mortality models. After determining the combined mortality model, we numerically calculate the value of the claim to be paid and the premium price. By calculating equity-linked life insurance premiums well, it is expected that no party will be disadvantaged by inaccuracy in the calculation results.
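As an illustration of one ingredient of such a model, a Gompertz force of mortality gives a closed-form survival probability, from which a simple net single premium follows; the parameters below are illustrative, not fitted to the 2011 Indonesian Mortality Table:

```python
import math

def gompertz_survival(x, t, b=0.00003, c=1.1):
    """Probability that a life aged x survives t more years under a Gompertz
    force of mortality mu(x) = b * c**x. Parameters b, c are illustrative.
    t_p_x = exp(-b * c**x * (c**t - 1) / ln(c))."""
    return math.exp(-b * c**x * (c**t - 1.0) / math.log(c))

def pure_endowment_premium(x, n, benefit, i=0.04, **kw):
    """Net single premium for an n-year pure endowment:
    benefit * v**n * n_p_x, with discount factor v = 1/(1+i)."""
    v = 1.0 / (1.0 + i)
    return benefit * v**n * gompertz_survival(x, n, **kw)
```

A combined model would blend several such survival laws (Weibull, Inverse-Weibull, Gompertz) before pricing, and an equity-linked benefit would replace the fixed benefit with a payoff depending on the investment account.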
The longitudinal study of turnover and the cost of turnover in emergency medical services.
Patterson, P Daniel; Jones, Cheryl B; Hubble, Michael W; Carr, Matthew; Weaver, Matthew D; Engberg, John; Castle, Nicholas
2010-01-01
Few studies have examined employee turnover and associated costs in emergency medical services (EMS). To quantify the mean annual rate of turnover, total median cost of turnover, and median cost per termination in a diverse sample of EMS agencies. A convenience sample of 40 EMS agencies was followed over a six-month period. Internet, telephone, and on-site data-collection methods were used to document terminations, new hires, open positions, and costs associated with turnover. The cost associated with turnover was calculated based on a modified version of the Nursing Turnover Cost Calculation Methodology (NTCCM). The NTCCM identified direct and indirect costs through a series of questions that agency administrators answered monthly during the study period. A previously tested measure of turnover to calculate the mean annual rate of turnover was used. All calculations were weighted by the size of the EMS agency roster. The mean annual rate of turnover, total median cost of turnover, and median cost per termination were determined for three categories of agency staff mix: all-paid staff, mix of paid and volunteer (mixed) staff, and all-volunteer staff. The overall weighted mean annual rate of turnover was 10.7%. This rate varied slightly across agency staffing mix (all-paid = 10.2%, mixed = 12.3%, all-volunteer = 12.4%). Among agencies that experienced turnover (n = 25), the weighted median cost of turnover was $71,613.75, which varied across agency staffing mix (all-paid = $86,452.05, mixed = $9,766.65, and all-volunteer = $0). The weighted median cost per termination was $6,871.51 and varied across agency staffing mix (all-paid = $7,161.38, mixed = $1,409.64, and all-volunteer = $0). Annual rates of turnover and costs associated with turnover vary widely across types of EMS agencies. The study's mean annual rate of turnover was lower than expected based on information appearing in the news media and EMS trade magazines. 
Findings provide estimates of two key workforce measures--turnover rates and costs--where previously none have existed. Local EMS directors and policymakers at all levels of government may find the results and study methodology useful toward designing and evaluating programs targeting the EMS workforce.
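The roster-size weighting applied to the reported turnover rates and costs amounts to a standard weighted mean; a minimal sketch:

```python
def weighted_mean(values, weights):
    """Weighted mean, e.g. agency turnover rates weighted by roster size."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

Weighting by roster size prevents many small agencies from dominating the overall rate that a head-count-based measure should reflect.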
Flux-tube divergence, coronal heating, and the solar wind
NASA Technical Reports Server (NTRS)
Wang, Y.-M.
1993-01-01
Using model calculations based on a self-consistent treatment of the coronal energy balance, we show how the magnetic flux-tube divergence rate controls the coronal temperature and the properties of the solar wind. For a fixed input of mechanical and Alfven-wave energy at the coronal base, we find that as the divergence rate increases, the maximum coronal temperature decreases but the mass flux leaving the sun gradually increases. As a result, the asymptotic wind speed decreases with increasing expansion factor near the sun, in agreement with empirical studies. As noted earlier by Withbroe, the calculated mass flux at the sun is remarkably insensitive to parameter variations; when combined with magnetohydrodynamic considerations, this self-regulatory property of the model explains the observed constancy of the mass flux at earth.
12 CFR 652.65 - Risk-based capital stress test.
Code of Federal Regulations, 2014 CFR
2014-01-01
... defaulted loans in the data set (20.9 percent). (3) You will calculate losses by multiplying the loss rate...) Data requirements. You will use the following data to implement the risk-based capital stress test. (1) You will use Corporation loan-level data to implement the credit risk component of the risk-based...
12 CFR 652.65 - Risk-based capital stress test.
Code of Federal Regulations, 2013 CFR
2013-01-01
... defaulted loans in the data set (20.9 percent). (3) You will calculate losses by multiplying the loss rate...) Data requirements. You will use the following data to implement the risk-based capital stress test. (1) You will use Corporation loan-level data to implement the credit risk component of the risk-based...
12 CFR 652.65 - Risk-based capital stress test.
Code of Federal Regulations, 2012 CFR
2012-01-01
... defaulted loans in the data set (20.9 percent). (3) You will calculate losses by multiplying the loss rate...) Data requirements. You will use the following data to implement the risk-based capital stress test. (1) You will use Corporation loan-level data to implement the credit risk component of the risk-based...
The mass balance of the ice plain of Ice Stream B and Crary Ice Rise
NASA Technical Reports Server (NTRS)
Bindschadler, Robert
1993-01-01
The region in the mouth of Ice Stream B (the ice plain) and that in the vicinity of Crary Ice Rise are experiencing large and rapid changes. Based on velocity, ice thickness, and accumulation rate data, the patterns of net mass balance in these regions were calculated. Net mass balance, or the rate of ice thickness change, was calculated as the residual of all mass fluxes into and out of subregions (or boxes). Net mass balance provides a measure of the state of health of the ice sheet and clues to the current dynamics.
This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...
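In its simplest single-substrate form, the Michaelis-Menten rate equation underlying the generalized version mentioned above is v = Vmax·C/(Km + C); a minimal sketch (the PBPK model itself uses generalized variants and rat-specific parameters not shown here):

```python
def michaelis_menten(c, vmax, km):
    """Metabolic rate v = Vmax * C / (Km + C) for substrate concentration C."""
    return vmax * c / (km + c)
```

At C = Km the rate is half of Vmax, which is the defining property of the Michaelis constant.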
Updated methane, non-methane organic gas, and volatile organic compound calculations based on speciation data. Updated speciation and toxic emission rates for new model year 2010 and later heavy-duty diesel engines. Updated particulate matter emission rates for 2004 and later mod...
40 CFR 86.166-12 - Method for calculating emissions due to air conditioning leakage.
Code of Federal Regulations, 2012 CFR
2012-07-01
... determine a refrigerant leakage rate in grams per year from vehicle-based air conditioning units. The... using the following equation: Grams/Yr_TOT = Grams/Yr_RP + Grams/Yr_SP + Grams/Yr_FH + Grams/Yr_MC + Grams/Yr_C, where: Grams/Yr_TOT = total air conditioning system emission rate in grams per year, rounded to...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2010 CFR
2010-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2012 CFR
2012-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2011 CFR
2011-07-01
... isokinetic sampling rates prior to a pollutant emission measurement run. The approximation method described... with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its equivalent...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2014 CFR
2014-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
40 CFR Appendix A-3 to Part 60 - Test Methods 4 through 5I
Code of Federal Regulations, 2013 CFR
2013-07-01
... moisture to aid in setting isokinetic sampling rates prior to a pollutant emission measurement run. The... simultaneously with a pollutant emission measurement run. When it is, calculation of percent isokinetic, pollutant emission rate, etc., for the run shall be based upon the results of the reference method or its...
76 FR 12090 - Commission Information Collection Activities (FERC-73); Comment Request; Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-04
... Pipelines Service Life Data'' (OMB No. 1902-0019) is used by the Commission to implement the statutory... depreciation rates, the pipeline companies are required to provide service life data as part of their data submissions if the proposed depreciation rates are based on the remaining physical life calculations. [[Page...
The vibrational dependence of dissociative recombination: Rate constants for N2+
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guberman, Steven L., E-mail: slg@sci.org
Dissociative recombination rate constants are reported with electron-temperature-dependent uncertainties for the lowest 5 vibrational levels of the N2+ ground state. The rate constants are determined from ab initio calculations of potential curves, electronic widths, quantum defects, and cross sections. At 100 K electron temperature, the rate constants overlap, with the exception of the third vibrational level. At and above 300 K, the rate constants for excited vibrational levels are significantly smaller than that for the ground level. It is shown that any experimentally determined total rate constant at 300 K electron temperature that is smaller than 2.0 × 10⁻⁷ cm³/s is likely to be for ions that have a substantially excited vibrational population. Using the vibrational-level-specific rate constants, the total rate constant is in very good agreement with that for an excited vibrational distribution found in a storage ring experiment. It is also shown that a prior analysis of a laser-induced fluorescence experiment is quantitatively flawed due to the need to account for reactions with unknown rate constants. Two prior calculations of the dissociative recombination rate constant are shown to be inconsistent with the cross sections upon which they are based. The rate constants calculated here contribute to the resolution of a 30-year-old disagreement between modeled and observed N2+ ionospheric densities.
Data on inelastic processes in low-energy potassium-hydrogen and rubidium-hydrogen collisions
NASA Astrophysics Data System (ADS)
Yakovleva, S. A.; Barklem, P. S.; Belyaev, A. K.
2018-01-01
Two sets of rate coefficients for low-energy inelastic potassium-hydrogen and rubidium-hydrogen collisions were computed for each collisional system based on two model electronic structure calculations, performed by the quantum asymptotic semi-empirical and the quantum asymptotic linear combination of atomic orbitals (LCAO) approaches, followed by quantum multichannel calculations for the non-adiabatic nuclear dynamics. The rate coefficients for the charge transfer (mutual neutralization, ion-pair formation), excitation, and de-excitation processes are calculated for all transitions between the five lowest-lying covalent states and the ionic states for each collisional system for the temperature range 1000-10 000 K. The processes involving higher-lying states have extremely low rate coefficients and are hence neglected. The two model calculations both single out the same partial processes as having large and moderate rate coefficients. The largest rate coefficients correspond to the mutual neutralization processes into the K(5s ²S) and Rb(4d ²D) final states and at a temperature of 6000 K have values exceeding 3 × 10⁻⁸ cm³ s⁻¹ and 4 × 10⁻⁸ cm³ s⁻¹, respectively. It is shown that both the semi-empirical and the LCAO approaches perform equally well on average and that both sets of atomic data have roughly the same accuracy. The processes with large and moderate rate coefficients are likely to be important for non-LTE modelling in the atmospheres of F-, G- and K-stars, especially metal-poor stars.
NASA Astrophysics Data System (ADS)
Bareev, D. D.; Gavrilenko, V. G.; Grach, S. M.; Sergeev, E. N.
2016-02-01
It is shown experimentally that the relaxation time of the stimulated electromagnetic emission (SEE) after the pump wave turn-off decreases when the frequency of the electromagnetic wave responsible for the SEE generation (pump wave f0 or diagnostic wave fdw) approaches the 4th harmonic of the electron cyclotron frequency fce. Since the SEE relaxation is determined by the damping rate of plasma waves at the same frequency responsible for the SEE generation, we calculated damping rates of plasma waves with ω ∼ ωuh (ω is the plasma wave frequency, ωuh is the upper hybrid frequency) for frequencies close to and distant from the double resonance where ωuh ∼ 4ωce (ωce = 2πfce). The calculations were performed numerically on the basis of the linear plasma wave dispersion relation at an arbitrary ratio between |Δ| = ω - 4ωce and |k‖|VTe (VTe is the electron thermal speed and k‖ is the projection of the wave vector onto the magnetic field direction). A comparison of calculated and experimental results has shown that the obtained frequency dependence of the SEE decay rate is similar to the damping rate frequency dependence for plasma waves with wave vectors directed at angles of 60-70° to the magnetic field, and gives a strong hint that oblique upper hybrid plasma waves are responsible for the SEE generation.
Characterization of a mine fire using atmospheric monitoring system sensor data
Yuan, L.; Thomas, R.A.; Zhou, L.
2017-01-01
Atmospheric monitoring systems (AMS) have been widely used in underground coal mines in the United States for the detection of fire in the belt entry and the monitoring of other ventilation-related parameters such as airflow velocity and methane concentration in specific mine locations. In addition to an AMS being able to detect a mine fire, the AMS data have the potential to provide fire characteristic information such as fire growth — in terms of heat release rate — and exact fire location. Such information is critical in making decisions regarding fire-fighting strategies, underground personnel evacuation and optimal escape routes. In this study, a methodology was developed to calculate the fire heat release rate using AMS sensor data for carbon monoxide concentration, carbon dioxide concentration and airflow velocity based on the theory of heat and species transfer in ventilation airflow. Full-scale mine fire experiments were then conducted in the Pittsburgh Mining Research Division’s Safety Research Coal Mine using an AMS with different fire sources. Sensor data collected from the experiments were used to calculate the heat release rates of the fires using this methodology. The calculated heat release rate was compared with the value determined from the mass loss rate of the combustible material using a digital load cell. The experimental results show that the heat release rate of a mine fire can be calculated using AMS sensor data with reasonable accuracy. PMID:28845058
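A heat release rate estimate of the kind described can be sketched from CO2 sensor data using generation calorimetry; the CO2 energy constant and gas density below are generic literature-style values, and the single-equation form is a simplification of the paper's methodology:

```python
E_CO2 = 13.3e6   # J released per kg of CO2 generated (generic value; assumption)
RHO_CO2 = 1.84   # kg/m^3 for CO2 near 20 C and 1 atm (approximate)

def heat_release_rate(x_co2_down, x_co2_up, velocity, area):
    """HRR (W) from the rise in CO2 volume fraction across the fire,
    airflow velocity (m/s), and entry cross-section area (m^2)."""
    v_dot = velocity * area                                   # airflow, m^3/s
    m_dot_co2 = (x_co2_down - x_co2_up) * v_dot * RHO_CO2     # extra CO2, kg/s
    return m_dot_co2 * E_CO2
```

A CO-based correction term and careful sensor placement, as in the full methodology, would refine this estimate for under-ventilated fires.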
NASA Astrophysics Data System (ADS)
Rathod, Maureen L.
Initially, 3D FEM simulation of a simplified mixer was used to examine the effect of mixer configuration and operating conditions on dispersive mixing of a non-Newtonian fluid. Horizontal and vertical velocity magnitudes increased with increasing mixer speed, while maximum axial velocity and shear rate were greater with staggered paddles. In contrast, parallel paddles produced an area of efficient dispersive mixing between the center of the paddle and the barrel wall. This study was expanded to encompass the complete nine-paddle mixing section using power-law and Bird-Carreau fluid models. In the center of the mixer, simple shear flow was seen, corresponding with high shear rates. Efficient dispersive mixing appeared near the barrel wall at all flow rates and near the barrel center with parallel paddles. Areas of backflow, improving fluid retention time, occurred with staggered paddles. The Bird-Carreau fluid showed greater influence of paddle motion under the same operating conditions due to the inelastic nature of the fluid. Shear-thinning behavior also resulted in a greater maximum shear rate as shearing became easier with decreasing fluid viscosity. Shear rate distributions are frequently calculated, but extension rate calculations have not been made in a complex geometry since Debbaut and Crochet (1988) defined extension rate as the ratio of the third to the second invariant of the strain rate tensor. Extension rate was assumed to be negligible in most studies, but here extension rate is shown to be significant. It is possible to calculate the maximum stable bubble diameter from the capillary number if shear and extension rates in a flow field are known. Extension rate distributions were calculated for Newtonian and non-Newtonian fluids. High extension and shear rates were found in the intermeshing region.
Extension is the major influence on the critical capillary number and maximum stable bubble diameter, but when extension rate values are low, shear rate has a larger impact. Examination of the maximum stable bubble diameter through the mixer predicted areas of higher bubble dispersion based on flow type. This research has advanced simulation of non-Newtonian fluids and shown that direct calculation of extension rate is possible, demonstrating the effect of extension rate on bubble break-up.
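Following the idea attributed to Debbaut and Crochet of defining an extension rate from the ratio of the third to the second invariant of the strain-rate tensor, a sketch using one common convention for the invariants (which may differ from theirs by constant factors):

```python
import numpy as np

def extension_rate(grad_v):
    """Extension-rate measure (units 1/s) from a 3x3 velocity gradient:
    ratio of the third invariant (det D, ~s^-3) to the second invariant
    (tr(D @ D), ~s^-2) of the strain-rate tensor D. The invariant
    convention here is one common choice, not necessarily the original."""
    D = 0.5 * (grad_v + grad_v.T)        # symmetric strain-rate tensor
    second = np.trace(D @ D)
    third = np.linalg.det(D)
    return 0.0 if second == 0.0 else third / second
```

Pure simple shear gives a singular D (zero determinant), so this measure vanishes there, while uniaxial extension gives a nonzero value proportional to the extension rate, which is the qualitative behavior such a measure is meant to capture.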
Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-06-21
Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
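The γ-index evaluation mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. As a rough illustration (not the engine's actual implementation, which works on 3D patient dose grids), a brute-force 1D global γ passing rate under assumed 3%/3 mm criteria might look like:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing, dose_crit=0.03, dist_crit=3.0):
    """1D global gamma-index passing rate (illustrative, brute force).

    ref, eval_ : dose arrays on the same regular grid
    spacing    : grid spacing in mm
    dose_crit  : dose criterion as a fraction of the max reference dose
    dist_crit  : distance-to-agreement criterion in mm
    """
    x = np.arange(len(ref)) * spacing
    dd = dose_crit * ref.max()  # global dose tolerance
    gammas = []
    for xi, de in zip(x, eval_):
        # squared gamma against every reference point; keep the minimum
        g2 = ((x - xi) / dist_crit) ** 2 + ((ref - de) / dd) ** 2
        gammas.append(np.sqrt(g2.min()))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

# identical distributions trivially pass everywhere
ref = np.exp(-np.linspace(-3, 3, 61) ** 2)
print(gamma_pass_rate(ref, ref, spacing=1.0))  # 100.0
```

The real comparison is performed in 3D against a Monte Carlo gold standard; this sketch only shows how a 3%/3 mm passing rate is defined.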
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true How is the pay rate for COP calculated? 10.216... AMENDED Continuation of Pay Calculation of Cop § 10.216 How is the pay rate for COP calculated? The employer shall calculate COP using the period of time and the weekly pay rate. (a) The pay rate for COP...
20 CFR 10.216 - How is the pay rate for COP calculated?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true How is the pay rate for COP calculated? 10.216... AMENDED Continuation of Pay Calculation of Cop § 10.216 How is the pay rate for COP calculated? The employer shall calculate COP using the period of time and the weekly pay rate. (a) The pay rate for COP...
Parsons, T.
2002-01-01
The M = 7.8 1906 San Francisco earthquake cast a stress shadow across the San Andreas fault system, inhibiting other large earthquakes for at least 75 years. The duration of the stress shadow is a key question in San Francisco Bay area seismic hazard assessment. This study presents a three-dimensional (3-D) finite element simulation of post-1906 stress recovery. The model reproduces observed geologic slip rates on major strike-slip faults and produces surface velocity vectors comparable to geodetic measurements. Fault stressing rates calculated with the finite element model are compared with rates calculated using deep dislocation slip. In the finite element model, tectonic stressing is distributed throughout the crust and upper mantle, whereas tectonic stressing calculated with dislocations is focused mostly on faults. In addition, the finite element model incorporates postseismic effects such as deep afterslip and viscoelastic relaxation in the upper mantle. More distributed stressing and postseismic effects in the finite element model lead to lower calculated tectonic stressing rates and longer stress shadow durations (17-74 years compared with 7-54 years). All models considered indicate that the 1906 stress shadow was completely erased by tectonic loading no later than 1980. However, the stress shadow still affects present-day earthquake probability. Use of stressing rate parameters calculated with the finite element model yields a 7-12% reduction in 30-year probability caused by the 1906 stress shadow as compared with calculations not incorporating interactions. The aggregate interaction-based probability on selected segments (not including the ruptured San Andreas fault) is 53-70% versus the noninteraction range of 65-77%.
Statistical dielectronic recombination rates for multielectron ions in plasma
NASA Astrophysics Data System (ADS)
Demura, A. V.; Leont'iev, D. S.; Lisitsa, V. S.; Shurygin, V. A.
2017-10-01
We describe the general analytic derivation of the dielectronic recombination (DR) rate coefficient for multielectron ions in a plasma based on the statistical theory of an atom in terms of the spatial distribution of the atomic electron density. The dielectronic recombination rates for complex multielectron tungsten ions are calculated numerically over a wide range of plasma temperatures, which is important for modern nuclear fusion studies. The results of the statistical theory are compared with data obtained using the level-by-level codes ADPAK, FAC, and HULLAC, and with experimental results. We consider different statistical DR models based on the Thomas-Fermi distribution, viz., integral and differential with respect to the orbital angular momenta of the ion core and the trapped electron, as well as the Rost model, which is an analog of the Frank-Condon model as applied to atomic structures. In view of its universality and relative simplicity, the statistical approach can be used for obtaining rapid estimates of the dielectronic recombination rate coefficients in complex calculations of the parameters of thermonuclear plasmas. The application of statistical methods also provides dielectronic recombination rates with far less computation time than available level-by-level codes.
A broad-group cross-section library based on ENDF/B-VII.0 for fast neutron dosimetry Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpan, F.A.
2011-07-01
A new ENDF/B-VII.0-based coupled 44-neutron, 20-gamma-ray-group cross-section library was developed to investigate the latest evaluated nuclear data file (ENDF), in comparison to ENDF/B-VI.3 used in BUGLE-96, as well as to generate an objective-specific library. The objectives selected for this work consisted of dosimetry calculations for in-vessel and ex-vessel reactor locations, iron atom displacement calculations for reactor internals and pressure vessel, and the 58Ni(n,γ) calculation that is important for gas generation in the baffle plate. The new library was generated based on the contribution and point-wise cross-section-driven (CPXSD) methodology and was applied to one of the most widely used benchmarks, the Oak Ridge National Laboratory Pool Critical Assembly benchmark problem. In addition to the new library, an ENDF/B-VII.0-based coupled 47-neutron, 20-gamma-ray-group cross-section library was generated; this library and BUGLE-96 were used with both SNLRML and IRDF dosimetry cross sections to compute reaction rates. All reaction rates computed by the multigroup libraries are within ±20% of measurement data and meet the U.S. Nuclear Regulatory Commission acceptance criterion for reactor vessel neutron exposure evaluations specified in Regulatory Guide 1.190.
Effectiveness of a computer based medication calculation education and testing programme for nurses.
Sherriff, Karen; Burston, Sarah; Wallis, Marianne
2012-01-01
The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Measurement and simulation of thermal neutron flux distribution in the RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.
2018-01-01
The in-core thermal neutron flux distribution was determined using measurement and simulation methods for Malaysia's PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate calculations performed with the Monte Carlo code MCNP. Detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards the outer side of the core. The results show reasonably good agreement between calculation and measurement, with both showing the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurements. As the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, that is, neutron flux and spectra calculations, dose rate calculations, reaction rate calculations, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longfellow, B.; Gade, A.; Brown, B. A.
Energy levels and branching ratios for the rp-process nucleus 25Si were determined from the reactions 9Be ( 26Si, 25Si) X and 9Be ( 25Al, 25Si) X using in-beam γ-ray spectroscopy with both high-efficiency and high-resolution detector arrays. Proton-unbound states at 3695(14) and 3802(11) keV were identified and assigned tentative spins and parities based on comparison to theory and the mirror nucleus. The 24Al (p, γ) 25Si reaction rate was calculated using the experimental states and states from charge-dependent USDA and USDB shell-model calculations with downward shifts of the 1s 1/2 proton orbital to account for the observed Thomas-Ehrman shift, leading to a factor of 10–100 increase in rate for the temperature region of 0.22 GK as compared to a previous calculation. These shifts may be applicable to neighboring nuclei, impacting the proton capture rates in this region of the chart.
2018-05-04
Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations
NASA Astrophysics Data System (ADS)
Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; JET contributors
2017-09-01
The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oktamuliani, Sri, E-mail: srioktamuliani@ymail.com; Su’ud, Zaki, E-mail: szaki@fi.itb.ac.id
A preliminary design study of SPINNOR (Small Power Reactor, Indonesia, No On-Site Refueling), a liquid metal Pb-Bi cooled fast reactor with (U, Pu)N fuel and 150 MWth power, has been performed. The neutronic calculation uses SRAC for a 2D (R-Z) cylindrical core of 90 × 135 cm; the core fuel is heterogeneous, with PuN percentages of 10, 12, and 13%, and the calculated effective neutron multiplication factor is 1.0488. The power density distribution output by SRAC is used for the thermal-hydraulic calculation in a Delphi (Pascal-based) code that has been developed. The research designed a reactor capable of natural circulation at an inlet temperature of 300 °C with variation of the total mass flow rate. The total mass flow rate affects the pressure drop and the outlet temperature of the reactor core. The greater the total mass flow rate, the lower the outlet temperature, but the larger the pressure drop, so a taller chimney is needed to achieve natural circulation, i.e., operation without a pump. Optimization of the total mass flow rate produces an optimal reactor design at a total mass flow rate of 5000 kg/s, with an outlet temperature of 524.843 °C but requiring a chimney of 6.69 meters.
Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin
2013-12-15
The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations are performed using the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only when the measurement station lies leeward of the plant. Using inverse transform sampling, which is a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
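Inverse transform sampling as used here draws synthetic emission rates from the empirical distribution of the measured or inferred ones, by inverting the cumulative distribution function. A minimal sketch with hypothetical emission values (the study's actual rates are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_transform_sample(values, n, rng):
    """Draw n samples from the empirical distribution of `values`
    by inverting its empirical cumulative distribution function."""
    sorted_vals = np.sort(values)
    # empirical CDF level reached at each sorted observation
    probs = np.arange(1, len(sorted_vals) + 1) / len(sorted_vals)
    u = rng.uniform(0.0, 1.0, n)  # uniform variates to invert
    return np.interp(u, probs, sorted_vals)

# hypothetical emission rates (g/s) inferred for leeward wind directions
measured = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 3.0])
# synthetic rates to fill in wind directions with no measurements
synthetic = inverse_transform_sample(measured, 1000, rng)
```

By construction, the cumulative frequency distribution of `synthetic` converges to that of the measured data, which is the property the validation in the abstract checks.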
Li, Jun; Guo, Hua
2018-03-15
Thermal rate coefficients for the title reaction and its various isotopologues are computed using a tunneling-corrected transition-state theory on a global potential energy surface recently developed by fitting a large number of high-level ab initio points. The calculated rate coefficients are found to agree well with the measured ones over a wide temperature range, validating the accuracy of the potential energy surface. Strong non-Arrhenius effects are found at low temperatures. In addition, the calculations reproduced the primary and secondary kinetic isotope effects. These results confirm the strong influence of tunneling on this heavy-light-heavy hydrogen abstraction reaction.
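Non-Arrhenius (curved) temperature dependence of the kind reported above is commonly captured by a modified Arrhenius form k(T) = A·T^n·exp(−Ea/RT), where n ≠ 0 produces curvature on an ln k vs. 1/T plot. A small sketch with purely illustrative parameters (not fitted to the title reaction):

```python
import numpy as np

R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1

def modified_arrhenius(T, A, n, Ea):
    """k(T) = A * T^n * exp(-Ea / (R*T)); n != 0 gives non-Arrhenius
    curvature on an ln k vs 1/T (Arrhenius) plot."""
    return A * T**n * np.exp(-Ea / (R * T))

# illustrative parameters only: A in arbitrary units, Ea in kJ/mol
T = np.linspace(200.0, 1000.0, 9)
k = modified_arrhenius(T, A=1.0e-12, n=2.0, Ea=10.0)

# quantities one would plot to inspect the curvature
lnk, invT = np.log(k), 1.0 / T
```

With a positive activation energy and positive n, k(T) rises monotonically with temperature, and the T^n prefactor dominates the low-temperature deviation from straight-line Arrhenius behaviour.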
Ouk, Chanda-Malis; Zvereva-Loëte, Natalia; Scribano, Yohann; Bussery-Honvault, Béatrice
2012-10-30
Multireference single and double configuration interaction (MRCI) calculations including Davidson (+Q) or Pople (+P) corrections have been conducted in this work for the reactants, products, and extrema of the doublet ground state potential energy surface involved in the N((2)D) + CH(4) reaction. Such highly correlated ab initio calculations are then compared with previous PMP4, CCSD(T), W1, and DFT/B3LYP studies. Large relative differences are observed, in particular for the transition state in the entrance channel, resolving the disagreement between previous ab initio calculations. We confirm the existence of a small but positive potential barrier (3.86 ± 0.84 kJ mol(-1) (MR-AQCC) and 3.89 kJ mol(-1) (MRCI+P)) in the entrance channel of the title reaction. The correlation treatment significantly changes the energetic positions of the two minima and five saddle points of this system, together with the dissociation channels, but not their relative order. The influence of electronic correlation on the energetics of the system is clearly demonstrated by evaluating the thermal rate constant and its temperature dependence by means of transition state theory. Indeed, only MRCI values are able to reproduce the experimental rate constant of the title reaction and its behavior with temperature. Similarly, product branching ratios, evaluated by means of unimolecular RRKM theory, confirm the NH production of Umemoto et al., whereas previous works based on less accurate ab initio calculations failed. We confirm the previous findings that the N((2)D) + CH(4) reaction proceeds via an insertion-dissociation mechanism and that the dominant product channels are CH(2)NH + H and CH(3) + NH. Copyright © 2012 Wiley Periodicals, Inc.
Using the global positioning satellite system to determine attitude rates using doppler effects
NASA Technical Reports Server (NTRS)
Campbell, Charles E. (Inventor)
2003-01-01
In the absence of a gyroscope, the attitude and attitude rate of a receiver can be determined using signals received by antennae on the receiver. Based on the signals received by the antennae, the Doppler difference between the signals is calculated. The Doppler difference may then be used to determine the attitude rate. With signals received from two signal sources by three antennae pairs, the three-dimensional attitude rate is determined.
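Under the simplifying assumption that the antenna baseline is perpendicular to the line of sight to the signal source (so the full rotational velocity projects onto the Doppler measurement), the attitude rate follows directly from the Doppler difference. A hedged scalar sketch; the function name and single-axis geometry are illustrative, not taken from the patent:

```python
C = 299_792_458.0       # speed of light, m/s
L1_FREQ = 1_575.42e6    # GPS L1 carrier frequency, Hz

def attitude_rate_from_doppler(delta_f_hz, baseline_m, f0=L1_FREQ):
    """Angular rate (rad/s) from the Doppler difference between two
    antennas, assuming the baseline is perpendicular to the line of
    sight so the rotational velocity fully projects onto it."""
    delta_v = delta_f_hz * C / f0   # line-of-sight velocity difference
    return delta_v / baseline_m     # omega = v / r for rigid rotation

# example: 0.1 Hz Doppler difference across a 1 m baseline
omega = attitude_rate_from_doppler(0.1, 1.0)
```

With multiple antenna pairs and two signal sources, as the abstract describes, the same relation applied per axis yields the full three-dimensional attitude rate.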
Williams, Denita; Castleman, Jennifer; Lee, Chi-Ching; Mote, Beth; Smith, Mary Alice
2009-11-01
One-third of the annual cases of listeriosis in the United States occur during pregnancy and can lead to miscarriage or stillbirth, premature delivery, or infection of the newborn. Previous risk assessments completed by the Food and Drug Administration/the Food Safety Inspection Service of the U.S. Department of Agriculture/the Centers for Disease Control and Prevention (FDA/USDA/CDC) and the Food and Agricultural Organization/the World Health Organization (FAO/WHO) were based on dose-response data from mice. Recent animal studies using nonhuman primates and guinea pigs have both estimated LD50s of approximately 10^7 Listeria monocytogenes colony forming units (cfu). The FAO/WHO estimated a human LD50 of 1.9 × 10^6 cfu based on data from a pregnant woman consuming contaminated soft cheese. We reevaluated risk based on dose-response curves from pregnant rhesus monkeys and guinea pigs. Using standard risk assessment methodology including hazard identification, exposure assessment, hazard characterization, and risk characterization, risk was calculated based on the new dose-response information. To compare models, we looked at mortality rate per serving at predicted doses ranging from 10^-4 to 10^12 L. monocytogenes cfu. Based on a serving of 10^6 L. monocytogenes cfu, the primate model predicts a death rate of 5.9 × 10^-1 compared to the FDA/USDA/CDC (fig. IV-12) predicted rate of 1.3 × 10^-7. Based on the guinea pig and primate models, the mortality rate calculated by the FDA/USDA/CDC is underestimated for this susceptible population.
NASA Astrophysics Data System (ADS)
El, Andrej; Muronga, Azwinndini; Xu, Zhe; Greiner, Carsten
2010-12-01
Relativistic dissipative hydrodynamic equations are extended by taking into account particle number changing processes in a gluon system, which expands in one dimension boost-invariantly. Chemical equilibration is treated by a rate equation for the particle number density based on the Boltzmann equation and Grad's ansatz for the off-equilibrium particle phase space distribution. We find that not only the particle production, but also the temperature and the momentum spectra of the gluon system, obtained from the hydrodynamic calculations, are sensitive to the rates of particle number changing processes. Comparisons of the hydrodynamic calculations with transport calculations employing the parton cascade BAMPS show the inaccuracy of the rate equation at large shear viscosity to entropy density ratio. To improve the rate equation, Grad's ansatz has to be modified beyond the second moments in momentum.
Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J
2018-03-01
Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
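A time-driven ABC model needs only the two parameters named above: the time an activity requires and the unit cost of that time. A minimal sketch with hypothetical per-service times, volumes, and labor cost; only the aggregate capacity figures (10,645 vs. 8400 min/wk) come from the abstract:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    minutes: int        # labor time required per unit of service
    weekly_units: int   # units of the service delivered per week

# hypothetical figures for illustration only
COST_PER_MIN = 1.25             # employee cost per minute of labor ($)
PRACTICAL_CAPACITY_MIN = 8400   # practical labor capacity (min/wk)

services = [
    Service("DNA microinjection", 120, 30),
    Service("ES cell microinjection", 150, 20),
    Service("embryo transfer", 90, 40),
    Service("in vitro fertilization", 180, 10),
]

# time-driven ABC: cost of one service = time required x unit cost of time
unit_costs = {s.name: s.minutes * COST_PER_MIN for s in services}

# capacity check: total labor demanded vs. practical capacity
demand = sum(s.minutes * s.weekly_units for s in services)
overloaded = demand > PRACTICAL_CAPACITY_MIN
```

Updating the model for operational changes amounts to editing the `minutes`, `weekly_units`, or `COST_PER_MIN` inputs, which is what makes the time-driven variant easy to maintain compared with a traditional ABC model.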
This work describes the development of a physiologically based pharmacokinetic (PBPK) model of deltamethrin, a type II pyrethroid, in the developing male Sprague-Dawley rat. Generalized Michaelis-Menten equations were used to calculate metabolic rate constants and organ weights ...
DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2015-01-01
Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827
Computation of infrared cooling rates in the water vapor bands
NASA Technical Reports Server (NTRS)
Chou, M. D.; Arking, A.
1978-01-01
A fast but accurate method for calculating the infrared radiative terms due to water vapor has been developed. It makes use of the far wing approximation to scale transmission along an inhomogeneous path to an equivalent homogeneous path. Rather than using standard conditions for scaling, the reference temperatures and pressures are chosen in this study to correspond to the regions where cooling is most significant. This greatly increased the accuracy of the new method. Compared to line by line calculations, the new method has errors up to 4% of the maximum cooling rate, while a commonly used method based upon the Goody band model (Rodgers and Walshaw, 1966) introduces errors up to 11%. The effect of temperature dependence of transmittance has also been evaluated; the cooling rate errors range up to 11% when the temperature dependence is ignored. In addition to being more accurate, the new method is much faster than those based upon the Goody band model.
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed on long intervals (e.g., 30 min), but only incorrect detections within a short interval (e.g., 10 s) may cause incorrect (i.e., missed+false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on the distribution of incorrect beat detections over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over a long interval (sensitivity and positive predictive value over 30 min) and a short interval (area under the empirical cumulative distribution function (AUecdf) for short-interval (i.e., 10 s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated on the long interval (i.e., 30 min) (ρ=-0.8 and p<0.05) and AUecdf for sensitivity (ρ=0.9 and p<0.05) in all assessed ECG databases. Sensitivity over 30 min grouped the two detectors with the lowest false alarm rates, while AUecdf for sensitivity provided further information identifying the two beat detectors with the highest false alarm rates as well, which could not be separated using sensitivity over 30 min alone. Short-interval performance metrics can provide insights on the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
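The AUecdf metric summarizes how often a detector performs poorly in short windows: the area under the empirical CDF of per-window (e.g., 10 s) sensitivities. A sketch of one plausible way to compute it; the paper's exact definition may differ in details such as grid resolution:

```python
import numpy as np

def auecdf(window_scores):
    """Area under the empirical CDF of per-window scores on [0, 1].
    Lower is better here: a detector scoring 1.0 in every window has
    area near 0, while many low-scoring windows push the area up."""
    scores = np.sort(np.asarray(window_scores, dtype=float))
    grid = np.linspace(0.0, 1.0, 1001)
    # ECDF(t) = fraction of windows with score <= t
    ecdf = np.searchsorted(scores, grid, side="right") / len(scores)
    # trapezoidal integration of the ECDF over [0, 1]
    return float(np.sum((ecdf[1:] + ecdf[:-1]) * np.diff(grid)) / 2.0)

perfect = auecdf([1.0] * 30)   # every 10-s window fully sensitive
poor = auecdf([0.5] * 30)      # every window misses half the beats
```

Because the area accumulates mass wherever windows score low, two detectors with identical 30-min sensitivity can still differ sharply in AUecdf, which is the separation the study exploits.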
Freeman, Vincent L; Boylan, Emma E; Pugach, Oksana; Mclafferty, Sara L; Tossas-Milligan, Katherine Y; Watson, Karriem S; Winn, Robert A
2017-10-01
To address locally relevant cancer-related health issues, health departments frequently need data beyond that contained in standard census area-based statistics. We describe a geographic information system-based method for calculating age-standardized cancer incidence rates in non-census defined geographical areas using publicly available data. Aggregated records of cancer cases diagnosed from 2009 through 2013 in each of Chicago's 77 census-defined community areas were obtained from the Illinois State Cancer Registry. Areal interpolation through dasymetric mapping of census blocks was used to redistribute populations and case counts from community areas to Chicago's 50 politically defined aldermanic wards, and ward-level age-standardized 5-year cumulative incidence rates were calculated. Potential errors in redistributing populations between geographies were limited to <1.5% of the total population, and agreement between our ward population estimates and those from a frequently cited reference set of estimates was high (Pearson correlation r = 0.99, mean difference = -4 persons). A map overlay of safety-net primary care clinic locations and ward-level incidence rates for advanced-staged cancers revealed potential pathways for prevention. Areal interpolation through dasymetric mapping can estimate cancer rates in non-census defined geographies. This can address gaps in local cancer-related health data, inform health resource advocacy, and guide community-centered cancer prevention and control.
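Once case counts and populations have been redistributed to the ward geographies, direct age standardization weights each age-specific rate by a fixed standard population so wards with different age structures are comparable. A minimal sketch with hypothetical age strata, counts, and weights (not the study's data):

```python
# hypothetical 5-year data for one ward
cases  = {"0-49": 12, "50-64": 40, "65+": 80}           # case counts
pop    = {"0-49": 40000, "50-64": 12000, "65+": 8000}   # person counts
std_wt = {"0-49": 0.70, "50-64": 0.18, "65+": 0.12}     # standard pop. weights

def age_standardized_rate(cases, pop, std_wt, per=100_000):
    """Direct age standardization: weight each age-specific rate by a
    fixed standard population, then scale to cases per `per` persons."""
    return per * sum(std_wt[a] * cases[a] / pop[a] for a in cases)

asr = age_standardized_rate(cases, pop, std_wt)
```

The same function applied to every ward, after the dasymetric redistribution step, yields the ward-level age-standardized incidence rates the abstract describes.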
Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.
Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna
2018-04-28
We present a comparison between Monte Carlo (MC) results for homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In the MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with more realistic carrier gas. In this study, not only a primarily velocity scaling thermostat is considered, but also Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods are covered. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on MC calculations. The discrepancy increases up to several orders of magnitude when the density of the nucleating vapour decreases. At low temperatures, the difference to the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].
Axisymmetric computational fluid dynamics analysis of a film/dump-cooled rocket nozzle plume
NASA Technical Reports Server (NTRS)
Tucker, P. K.; Warsi, S. A.
1993-01-01
Prediction of convective base heating rates for a new launch vehicle presents significant challenges to analysts concerned with base environments. The present effort seeks to augment classical base heating scaling techniques via a detailed investigation of the exhaust plume shear layer of a single H2/O2 Space Transportation Main Engine (STME). Use of fuel-rich turbine exhaust to cool the STME nozzle presented concerns regarding potential recirculation of these gases to the base region with attendant increase in the base heating rate. A pressure-based full Navier-Stokes computational fluid dynamics (CFD) code with finite rate chemistry is used to predict plumes for vehicle altitudes of 10 kft and 50 kft. Levels of combustible species within the plume shear layers are calculated in order to assess assumptions made in the base heating analysis.
Source terms, shielding calculations and soil activation for a medical cyclotron.
Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E
2016-12-01
Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the 18O(p,n)18F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 and FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) codes as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the 18O(p,n)18F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv/h and thus significantly below the reference value of 0.5 µSv/h (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq/g.
Qualification of APOLLO2 BWR calculation scheme on the BASALA mock-up
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaglio-Gaudard, C.; Santamarina, A.; Sargeni, A.
2006-07-01
A new neutronic APOLLO2/MOC/SHEM/CEA2005 calculation scheme for BWR applications has been developed by the French Commissariat a l'Energie Atomique. This scheme is based on the latest calculation methodology (accurate mutual- and self-shielding formalism, MOC treatment of the transport equation) and the recent JEFF3.1 nuclear data library. This paper presents the experimental validation of this new calculation scheme on the BASALA BWR mock-up. The BASALA programme is devoted to measurements of the physical parameters of high-moderation 100% MOX BWR cores, in hot and cold conditions. The experimental validation of the calculation scheme deals with core reactivity, fission rate maps, reactivity worth of void and absorbers (cruciform control blades and Gd pins), as well as the temperature coefficient. Results of the analysis using APOLLO2/MOC/SHEM/CEA2005 show an overestimation of the core reactivity by 600 pcm for BASALA-Hot and 750 pcm for BASALA-Cold. Reactivity worths of gadolinium poison pins and hafnium or B4C control blades are predicted by the APOLLO2 calculation within 2% accuracy. Furthermore, the radial power map is well predicted for every core configuration, including the Void and Hf/B4C configurations: fission rates in the central assembly are calculated within the ±2% experimental uncertainty for the reference cores. The C/E bias on the isothermal Moderator Temperature Coefficient, using the CEA2005 library based on the JEFF3.1 file, amounts to -1.7±0.3 pcm/°C over the range 10-80 °C. (authors)
A New Seismic Hazard Model for Mainland China
NASA Astrophysics Data System (ADS)
Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.
2017-12-01
We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically-based strain rate model. Small and medium sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
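The tapered Gutenberg-Richter rate used for each source zone can be sketched as follows: a Pareto decay in seismic moment (set by the b-value) multiplied by an exponential taper above the corner magnitude. The magnitude-moment conversion uses a common Hanks-Kanamori constant, and the a-value, b-value, and corner magnitude below are illustrative, not the model's fitted values:

```python
import math

# Hedged sketch of a tapered Gutenberg-Richter (TGR) cumulative rate.
# rate_min is the total annual rate of events at or above m_min (the
# a-value enters through it); b is the b-value; m_corner is the corner
# magnitude that tapers the Pareto tail in seismic moment.
def tgr_cumulative_rate(m, m_min, rate_min, b, m_corner):
    def moment(mw):                       # seismic moment (N*m) from Mw
        return 10.0 ** (1.5 * mw + 9.05)
    beta = b / 1.5                        # moment-space decay exponent
    m0, m0_min, m0_c = moment(m), moment(m_min), moment(m_corner)
    return rate_min * (m0_min / m0) ** beta * math.exp((m0_min - m0) / m0_c)

# Illustrative zone: 10 events/yr above M5, b = 1.0, corner magnitude 8.0;
# the implied rate of M >= 6 events is then close to 1 per year.
r6 = tgr_cumulative_rate(6.0, 5.0, 10.0, 1.0, 8.0)
```

At m = m_min the expression reduces to rate_min, and well below the corner magnitude it behaves like the ordinary Gutenberg-Richter law.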
Brudnik, Katarzyna; Twarda, Maria; Sarzyński, Dariusz; Jodkowski, Jerzy T
2013-10-01
Ab initio calculations at the G3 level were used in a theoretical description of the kinetics and mechanism of the chlorine-abstraction reactions from mono-, di-, tri- and tetra-chloromethane by chlorine atoms. The calculated profiles of the potential energy surface of the reaction systems show that the mechanism of the studied reactions is complex and that the Cl-abstraction proceeds via the formation of intermediate complexes. The multi-step reaction mechanism consists of two elementary steps in the case of CCl4 + Cl, and three for the other reactions. Rate constants were calculated using a theoretical method based on RRKM theory and a simplified version of the statistical adiabatic channel model. The temperature dependencies of the calculated rate constants can be expressed, in the temperature range 200-3,000 K, as [Formula: see text]. The rate constants for the reverse reactions CH3/CH2Cl/CHCl2/CCl3 + Cl2 were calculated via equilibrium constants derived theoretically. The kinetic equations [Formula: see text] allow a very good description of the reaction kinetics. The derived expressions are a substantial supplement to the kinetic data necessary to describe and model the complex gas-phase reactions of importance in combustion and atmospheric chemistry.
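The fitted expressions themselves are elided in the abstract ("[Formula: see text]"). Fits over a 200-3,000 K range are commonly reported in the modified-Arrhenius form k(T) = A (T/300)^n exp(-E/T); the sketch below evaluates that generic form with placeholder parameters, which are an assumption here, not the paper's fitted values:

```python
import math

# Generic modified-Arrhenius evaluation, k(T) = A * (T/300)^n * exp(-E/T).
# A (cm^3 molecule^-1 s^-1), n, and E_over_k (kelvin) are illustrative
# placeholders, not values from the study.
def k_mod_arrhenius(T, A=1.0e-11, n=1.5, E_over_k=2000.0):
    return A * (T / 300.0) ** n * math.exp(-E_over_k / T)

# Evaluate across the temperature range quoted in the abstract.
rates = {T: k_mod_arrhenius(T) for T in (200.0, 300.0, 1000.0, 3000.0)}
```

With a positive activation temperature the rate constant grows monotonically with T over this range.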
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paratte, J.M.; Pelloni, S.; Grimm, P.
1991-04-01
This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.
NASA Astrophysics Data System (ADS)
Yang, X.; Castleman, A. W., Jr.
1990-08-01
The kinetics and mechanisms of the reactions of Na+·(X)n=0-3, X = water, ammonia, and methanol, with CH3CN, CH3COCH3, CH3CHO, CH3COOH, CH3COOCH3, NH3, CH3OH, and CH3-O-C2H4-O-CH3 (DMOE) were studied at ambient temperature under different pressures. All of the switching (substitution) reactions proceed at near-collision rate and show little dependence on the flow tube pressure, the nature and size of the ligand, or the type of core ion. Interestingly, all of the measured rate constants agree well with predictions based on the parametrized trajectory calculations of Su and Chesnavich [J. Chem. Phys. 76, 5183 (1982)]. The reactions of the bare sodium ion with all neutrals proceed via a three-body association mechanism, and the measured rate constants cover a large range, from a slow association reaction with NH3 to a near-collision rate with DMOE. The lifetimes and the dissociation rate constants of the intermediate complexes, deduced using the parametrized trajectory results combined with the experimentally determined rates, compare fairly well with predictions based on RRKM theory. The calculations also account for the large isotope effect observed for the clustering of ND3 and NH3 to Na+.
AN ESTIMATION OF THE EXPOSURE OF THE POPULATION OF ISRAEL TO NATURAL SOURCES OF IONIZING RADIATION.
Epstein, L; Koch, J; Riemer, T; Haquin, G; Orion, I
2017-11-01
The radiation dose to the population of Israel due to exposure to natural sources of ionizing radiation was assessed. The main contributor to the dose is radon, which accounts for 60% of the exposure to natural sources. The dose due to radon inhalation was assessed by combining the results of a radon survey in single-family houses with the results of a survey in apartments in multi-storey buildings. The average annual dose due to radon inhalation was found to be 1.2 mSv. The dose rate due to exposure to cosmic radiation was assessed using a code that calculates the dose rate at different heights above sea level, taking into account the solar cycle. The annual dose was calculated based on the fraction of time spent indoors and the attenuation provided by buildings and was found to be 0.2 mSv. The annual dose due to external exposure to terrestrial radionuclides was similarly assessed. The indoor dose rate was calculated using a model that takes into account the concentrations of the natural radionuclides in building materials, and the density and thickness of the walls. The dose rate outdoors was calculated based on the concentrations of the natural radionuclides in different geological units in Israel as measured in an aerial survey and in measurements above ground. The annual dose was found to be 0.2 mSv. Doses due to internal exposure other than exposure to radon were also calculated and were found to be 0.4 mSv. The overall annual exposure of the population of Israel to natural sources of ionizing radiation is therefore 2 mSv and ranges between 1.7 and 2.7 mSv.
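As a quick cross-check of the abstract's arithmetic, the stated pathway doses sum to the reported overall figure:

```python
# The four pathway contributions quoted in the abstract; their sum should
# reproduce the stated overall annual dose of 2 mSv.
contributions_mSv = {
    "radon inhalation": 1.2,
    "cosmic radiation": 0.2,
    "terrestrial external exposure": 0.2,
    "internal exposure (non-radon)": 0.4,
}
total_mSv = sum(contributions_mSv.values())
print(f"total: {total_mSv:.1f} mSv")  # total: 2.0 mSv
```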
Evaluating user reputation in online rating systems via an iterative group-based ranking method
NASA Astrophysics Data System (ADS)
Gao, Jian; Zhou, Tao
2017-05-01
Reputation is a valuable asset in online social lives and it has drawn increasing attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method has better performance than the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.
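The iteration described above can be sketched in simplified form: group users by the rating they assign each object, weight each group by its members' current reputations, and recompute reputations from the weighted group sizes until the values stabilize. This is an illustration of the iterative idea with toy data, not the paper's exact algorithm:

```python
# Simplified iterative group-based ranking sketch (assumed variant, not the
# paper's exact method). A user's reputation is the average reputation-
# weighted size of the rating groups they belong to; group weights are then
# recomputed from the updated reputations.

def iterative_group_rank(ratings, n_iter=50):
    # ratings: {user: {object: discrete rating}}
    users = list(ratings)
    rep = {u: 1.0 for u in users}
    for _ in range(n_iter):
        new_rep = {}
        for u in users:
            score, count = 0.0, 0
            for obj, r in ratings[u].items():
                # reputation-weighted size of u's rating group on obj
                group_w = sum(rep[v] for v in users
                              if ratings[v].get(obj) == r)
                score += group_w
                count += 1
            new_rep[u] = score / count
        norm = max(new_rep.values())
        rep = {u: w / norm for u, w in new_rep.items()}  # normalize to [0,1]
    return rep

ratings = {
    "alice": {"m1": 5, "m2": 3},
    "bob":   {"m1": 5, "m2": 3},
    "carol": {"m1": 5, "m2": 3},
    "spam":  {"m1": 1, "m2": 5},   # consistently rates against the consensus
}
rep = iterative_group_rank(ratings)
```

The outlier who always rates against the consensus ends up isolated in small, low-weight groups, so its reputation decays toward zero across iterations.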
NASA Astrophysics Data System (ADS)
Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun
2018-04-01
A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of the overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as functions of process variables. A micro- and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. Mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic interfacial area change were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top-blowing converter, and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content in the slag and the kinetics of Mn and P in a BOF process.
Cancer Incidence Among Arab Americans in California, Detroit, and New Jersey SEER Registries
Bergmans, Rachel; Ruterbusch, Julie; Meza, Rafael; Hirko, Kelly; Graff, John; Schwartz, Kendra
2014-01-01
Objectives. We calculated cancer incidence for Arab Americans in California; Detroit, Michigan; and New Jersey, and compared rates with non-Hispanic, non-Arab Whites (NHNAWs); Blacks; and Hispanics. Methods. We conducted a study using population-based data. We linked new cancers diagnosed in 2000 from the Surveillance, Epidemiology, and End Results Program (SEER) to an Arab surname database. We used standard SEER definitions and methodology for calculating rates. Population estimates were extracted from the 2000 US Census. We calculated incidence and rate ratios. Results. Arab American men and women had similar incidence rates across the 3 geographic regions, and the rates were comparable to NHNAWs. However, the thyroid cancer rate was elevated among Arab American women compared with NHNAWs, Hispanics, and Blacks. For all sites combined, for prostate and lung cancer, Arab American men had a lower incidence than Blacks and higher incidence than Hispanics in all 3 geographic regions. Arab American male bladder cancer incidence was higher than that in Hispanics and Blacks in these regions. Conclusions. Our results suggested that further research would benefit from the federal recognition of Arab Americans as a specified ethnicity to estimate and address the cancer burden in this growing segment of the population. PMID:24825237
NASA Astrophysics Data System (ADS)
Wang, Hongyan; Li, Qiangzi; Du, Xin; Zhao, Longcai
2017-12-01
In the karst regions of southwest China, rocky desertification is one of the most serious problems in land degradation. The bedrock exposure rate is an important index for assessing the degree of rocky desertification in karst regions. Because of its inherent merits of macro scale, frequency, efficiency, and synthesis, remote sensing is a promising method to monitor and assess karst rocky desertification on a large scale. However, actual measurement of the bedrock exposure rate is difficult, and existing remote-sensing methods cannot directly be exploited to extract the bedrock exposure rate owing to the high complexity and heterogeneity of karst environments. Therefore, using unmanned aerial vehicle (UAV) and Landsat-8 Operational Land Imager (OLI) data for Xingren County, Guizhou Province, quantitative extraction of the bedrock exposure rate based on multi-scale remote-sensing data was developed. Firstly, we used an object-oriented method to carry out accurate classification of the UAV images. From the results of rock extraction, the bedrock exposure rate was calculated at the 30 m grid scale. Part of the calculated samples were used as training data; the other data were used for model validation. Secondly, in each grid the band reflectivity of the Landsat-8 OLI data was extracted and a variety of rock and vegetation indexes (e.g., NDVI and SAVI) were calculated. Finally, a network model was established to extract the bedrock exposure rate. The correlation coefficient of the network model was 0.855, that of the validation model was 0.677, and the root mean square error of the validation model was 0.073. This method is valuable for wide-scale estimation of the bedrock exposure rate in karst environments. Using the quantitative inversion model, a distribution map of the bedrock exposure rate in Xingren County was obtained.
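The two inputs named above can be sketched directly: the spectral indexes computed from red and near-infrared reflectance, and the target variable as the fraction of classified UAV pixels labeled rock in a 30 m cell. The band assignments follow the usual Landsat-8 OLI convention (band 4 = red, band 5 = NIR), and L = 0.5 is the customary SAVI soil-adjustment default; both are assumptions here rather than details from the paper:

```python
# Standard vegetation indexes from red and NIR surface reflectance.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    # L = 0.5 is the customary soil-adjustment factor (an assumption here)
    return (1.0 + L) * (nir - red) / (nir + red + L)

def bedrock_exposure_rate(cell_labels):
    # share of pixels classified as rock among the UAV labels in a 30 m cell
    return sum(1 for p in cell_labels if p == "rock") / len(cell_labels)
```

These per-cell index and exposure-rate pairs are exactly the kind of (feature, target) samples a regression model can be trained on.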
Updated Global Burden of Cholera in Endemic Countries
Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.
2015-01-01
Background The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation in our previous estimate was that the endemic and non-endemic countries were defined based on the countries’ reported cholera cases. We overcame the limitation with the use of a spatial modelling technique in defining endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percent of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using inverse variance-weighted average case-fatality rate (CFRs) from literature-based CFR estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3m-4.0m) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
First measurement of 30S+α resonant elastic scattering for the 30S(α ,p ) reaction rate
NASA Astrophysics Data System (ADS)
Kahl, D.; Yamaguchi, H.; Kubono, S.; Chen, A. A.; Parikh, A.; Binh, D. N.; Chen, J.; Cherubini, S.; Duy, N. N.; Hashimoto, T.; Hayakawa, S.; Iwasa, N.; Jung, H. S.; Kato, S.; Kwon, Y. K.; Nishimura, S.; Ota, S.; Setoodehnia, K.; Teranishi, T.; Tokieda, H.; Yamada, T.; Yun, C. C.; Zhang, L. Y.
2018-01-01
Background: Type I x-ray bursts are the most frequently observed thermonuclear explosions in the galaxy, resulting from thermonuclear runaway on the surface of an accreting neutron star. The 30S(α ,p ) reaction plays a critical role in burst models, yet insufficient experimental information is available to calculate a reliable, precise rate for this reaction. Purpose: Our measurement was conducted to search for states in 34Ar and determine their quantum properties. In particular, natural-parity states with large α -decay partial widths should dominate the stellar reaction rate. Method: We performed the first measurement of 30S+α resonant elastic scattering up to a center-of-mass energy of 5.5 MeV using a radioactive ion beam. The experiment utilized a thick gaseous active target system and silicon detector array in inverse kinematics. Results: We obtained an excitation function for 30S(α ,α ) near 150∘ in the center-of-mass frame. The experimental data were analyzed with R -matrix calculations, and we observed three new resonant patterns between 11.1 and 12.1 MeV, extracting their properties of resonance energy, widths, spin, and parity. Conclusions: We calculated the resonant thermonuclear reaction rate of 30S(α ,p ) based on all available experimental data of 34Ar and found an upper limit about one order of magnitude larger than a rate determined using a statistical model. The astrophysical impact of these two rates has been investigated through one-zone postprocessing type I x-ray burst calculations. We find that our new upper limit for the 30S(α ,p )33Cl rate significantly affects the predicted nuclear energy generation rate during the burst.
Identifying online user reputation in terms of user preference
NASA Astrophysics Data System (ADS)
Dai, Lu; Guo, Qiang; Liu, Xiao-Lu; Liu, Jian-Guo; Zhang, Yi-Cheng
2018-03-01
Identifying online user reputation is significant for online social systems. In this paper, taking into account the preference characteristics of online users' collective behaviors, we present an improved group-based rating method for ranking online user reputation based on user preference (PGR). All the ratings given by each specific user are mapped to the same rating criteria. By grouping users according to their mapped ratings, the online user reputation is calculated based on the corresponding group sizes. Results for the MovieLens and Netflix data sets show that the AUC values of the PGR method can reach 0.9842 (0.9493) and 0.9995 (0.9987) for malicious (random) spammers, respectively, outperforming the results generated by the traditional group-based method, which indicates that online preference plays an important role in measuring user reputation.
NASA Astrophysics Data System (ADS)
Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian
Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-producing amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm accounting for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.
Power Calculations and Placebo Effect for Future Clinical Trials in Progressive Supranuclear Palsy
Stamelou, Maria; Schöpe, Jakob; Wagenpfeil, Stefan; Ser, Teodoro Del; Bang, Jee; Lobach, Iryna Y.; Luong, Phi; Respondek, Gesine; Oertel, Wolfgang H.; Boxer, Adam L.; Höglinger, Günter U.
2016-01-01
Background Two recent randomized, placebo-controlled trials of putative disease-modifying agents (davunetide, tideglusib) in progressive supranuclear palsy (PSP) failed to show efficacy, but generated data relevant for future trials. Methods We provide sample size calculations based on data collected in 187 PSP patients assigned to placebo in these trials. A placebo effect was calculated. Results The total PSP-Rating Scale required the least number of patients per group (N = 51) to detect a 50% change in the 1-year progression and 39 when including patients with ≤ 5 years disease duration. The Schwab and England Activities of Daily Living required 70 patients per group and was highly correlated with the PSP-Rating Scale. A placebo effect was not detected in these scales. Conclusions We propose the 1-year PSP-Rating Scale score change as the single primary readout in clinical neuroprotective or disease-modifying trials. The Schwab and England Activities of Daily Living could be used as a secondary outcome. PMID:26948290
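The sample-size arithmetic behind such calculations follows the textbook two-group formula for a two-sided comparison of mean 1-year change, n per group = 2·((z₁₋α/₂ + z_power)·sd/δ)². The effect size and standard deviation below are placeholders, not the trials' data:

```python
import math
from statistics import NormalDist

# Standard two-group sample size for detecting a difference delta in mean
# change, given the outcome's standard deviation sd. Inputs are illustrative.
def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    n = 2.0 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return math.ceil(n)
```

Halving the detectable effect quadruples the required group size, which is why restricting enrollment to faster-progressing patients (larger expected change) shrinks the trial.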
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors passing through the checkpoints of venues hosting mass events. The mathematical model is based on a non-stationary queuing system (NQS) in which the arrival rate of requests depends on time; this rate function was chosen so that its properties resemble the real arrival rates of visitors coming to a stadium for football matches. A piecewise-constant approximation of the function is used when performing statistical modeling of the NQS. We calculated the dependence of the queue length and of the visitors' waiting time (time in queue) on time for different laws. The time required to serve the entire queue and the number of visitors entering the stadium by the start of the match were also calculated. We found the dependence of the macroscopic quantitative characteristics of the NQS on the number of averaging sections of the input rate.
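The piecewise-constant NQS can be sketched as a small discrete-event simulation: non-homogeneous Poisson arrivals generated by thinning against the maximum segment rate, served by c parallel checkpoints with exponential service times. All rates and durations below are illustrative, not the paper's data:

```python
import heapq
import random

# Minimal discrete-event sketch of a non-stationary queue with a
# piecewise-constant arrival rate (visitors/min) and c identical servers.
def simulate(rates, seg_len, c, mu, horizon, seed=1):
    rng = random.Random(seed)

    def rate(t):
        i = min(int(t // seg_len), len(rates) - 1)
        return rates[i]

    # Thinning: draw candidate arrivals at the maximum rate, accept each
    # with probability rate(t)/max_rate to realize the time-varying process.
    lam_max = max(rates)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t >= horizon:
            break
        if rng.random() < rate(t) / lam_max:
            arrivals.append(t)

    busy_until = [0.0] * c          # next free time of each checkpoint
    heapq.heapify(busy_until)
    waits = []
    for a in arrivals:              # each visitor takes the earliest-free server
        free = heapq.heappop(busy_until)
        start = max(a, free)
        waits.append(start - a)     # time spent in the queue
        heapq.heappush(busy_until, start + rng.expovariate(mu))
    return waits

# e.g. a pre-match surge: quiet, rush, then tail-off over 90 minutes
waits = simulate(rates=[2.0, 8.0, 3.0], seg_len=30.0, c=4, mu=1.0, horizon=90.0)
```

Averaging the rate function over fewer, coarser segments changes the simulated waiting-time profile, which is the sensitivity the paper quantifies.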
NASA Technical Reports Server (NTRS)
Opila, Elizabeth J.; Smialek, James L.; Robinson, Raymond C.; Fox, Dennis S.; Jacobson, Nathan S.
1998-01-01
In combustion environments, volatilization of SiO2 to Si-O-H(g) species is a critical issue. Available thermochemical data for Si-O-H(g) species were used to calculate boundary-layer-controlled fluxes from SiO2. Calculated fluxes were compared to volatilization rates of SiO2 scales grown on SiC, which were measured in Part 1 of this paper. Calculated volatilization rates were also compared to those measured in synthetic combustion gas furnace tests. Probable vapor species were identified in both fuel-lean and fuel-rich combustion environments based on the observed pressure, temperature, and velocity dependencies as well as the magnitude of the volatility rate. Water vapor is responsible for the degradation of SiO2 in the fuel-lean environment. Silica volatility in fuel-lean combustion environments is attributed primarily to the formation of Si(OH)4(g) with a small contribution of SiO(OH)2(g).
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model, developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. There are many studies available in which formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to the physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen-bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. The dermal penetration rate was estimated as the product of the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, the estimated dermal penetration rates were used to assign skin notations to the chemicals. Critical fluxes defined from the literature were recommended as reference values for the skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis to the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of the calculated K(p) values with data obtained earlier from other models showed that the LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone: both lipophilic and polar pathways of permeation exist across the stratum corneum.
It is feasible to predict skin notation on the basis of the LFER and other published models; of the 112 chemicals, 94 (84%) should carry the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified as having significant absorption and 65 (58%) as having the potential for dermal toxicity. We found major differences between alternative published analytical models in their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
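The flux estimate described above (permeability coefficient times saturated aqueous concentration, with log K(p) from an Abraham-type solvation equation) can be sketched as follows; the coefficient values below are hypothetical placeholders, not the fitted LFER coefficients used in the study:

```python
# Sketch of an LFER-style dermal flux estimate. The Abraham solvation
# equation has the form log Kp = c + e*E + s*S + a*A + b*B + v*V; the default
# coefficients here are ILLUSTRATIVE placeholders, not published fits.

def log_kp(E, S, A, B, V, c=-5.0, e=0.1, s=-0.5, a=-0.5, b=-3.0, v=2.0):
    """Abraham-type LFER for the permeability coefficient (log10 scale)."""
    return c + e * E + s * S + a * A + b * B + v * V

def dermal_flux(E, S, A, B, V, c_sat):
    """Steady-state flux = Kp * saturated aqueous concentration."""
    kp = 10 ** log_kp(E, S, A, B, V)
    return kp * c_sat

# A hypothetical solute with V = 1 and all other descriptors zero,
# at a saturated concentration of 100 mg/cm^3:
print(dermal_flux(0, 0, 0, 0, 1, 100.0))  # 0.1 (mg cm^-2 per unit time)
```

A chemical would then receive a skin notation when its estimated flux exceeds the chosen critical reference flux.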
Firefighter Math - a web-based learning tool
Dan Jimenez
2010-01-01
Firefighter Math is a web based interactive resource that was developed to help prepare wildland fire personnel for math based training courses. The website can also be used as a refresher for fire calculations including slope, flame length, relative humidity, flow rates, unit conversion, etc. The website is designed to start with basic math refresher skills and...
Valiev, R R; Cherepanov, V N; Baryshnikov, G V; Sundholm, D
2018-02-28
A method for calculating the rate constants for internal-conversion (kIC) and intersystem-crossing (kISC) processes within the adiabatic and Franck-Condon (FC) approximations is proposed. The applicability of the method is demonstrated by calculation of kIC and kISC for a set of organic and organometallic compounds with experimentally known spectroscopic properties. The studied molecules were pyrromethene-567 dye, psoralene, hetero[8]circulenes, free-base porphyrin, naphthalene, and larger polyacenes. We also studied fac-Alq3 and fac-Ir(ppy)3, which are important molecules in organic light-emitting diodes (OLEDs). The excitation energies were calculated at the multi-configuration quasi-degenerate second-order perturbation theory (XMC-QDPT2) level, which is found to yield excitation energies in good agreement with experimental data. Spin-orbit coupling matrix elements, non-adiabatic coupling matrix elements, Huang-Rhys factors, and vibrational energies were calculated at the time-dependent density functional theory (TDDFT) and complete active space self-consistent field (CASSCF) levels. The computed fluorescence quantum yields for the pyrromethene-567 dye, psoralene, hetero[8]circulenes, fac-Alq3 and fac-Ir(ppy)3 agree well with experimental data, whereas for the free-base porphyrin, naphthalene, and the polyacenes, the obtained quantum yields differ significantly from the experimental values, because the FC and adiabatic approximations are not accurate for these molecules.
Schiekirka, Sarah; Anders, Sven; Raupach, Tobias
2014-07-21
Estimating learning outcome from comparative student self-ratings is a reliable and valid method to identify specific strengths and shortcomings in undergraduate medical curricula. However, requiring students to complete two evaluation forms (i.e. one before and one after teaching) might adversely affect response rates. Alternatively, students could be asked to rate their initial performance level retrospectively. This approach might threaten the validity of results due to response shift or effort justification bias. Two consecutive cohorts of medical students enrolled in a six-week cardio-respiratory module were enrolled in this study. In both cohorts, performance gain was estimated for 33 specific learning objectives. In the first cohort, outcomes calculated from ratings provided before (pretest) and after (posttest) teaching were compared to outcomes derived from comparative self-ratings collected after teaching only (thentest and posttest). In the second cohort, only thentests and posttests were used to calculate outcomes, but data collection tools differed with regard to item presentation. In one group, thentest and posttest ratings were obtained sequentially on separate forms while in the other, both ratings were obtained simultaneously for each learning objective. Using thentest ratings to calculate performance gain produced slightly higher values than using true pretest ratings. Direct comparison of then- and posttest ratings also yielded slightly higher performance gain than sequential ratings, but this effect was negligibly small. Given the small effect sizes, using thentests appears to be equivalent to using true pretest ratings. Item presentation in the posttest does not significantly impact on results.
Demographic trends in Claremont California’s street tree population
Natalie S. van Doorn; E. Gregory McPherson
2018-01-01
The aim of this study was to quantify street tree population dynamics in the city of Claremont, CA. A repeated measures survey (2000 and 2014) based on a stratified random sampling approach across size classes and for the most abundant 21 species was analyzed to calculate removal, growth, and replacement planting rates. Demographic rates were estimated using a...
40 CFR 60.50Da - Compliance determination procedures and methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... paragraphs (g)(1) and (2) of this section to calculate emission rates based on electrical output to the grid... of appendix A of this part shall be used to compute the emission rate of PM. (2) For the particular... reduction from fuel pretreatment, percent; and %Rg = Percent reduction by SO2 control system, percent. (2...
Dropout Rates in Texas School Districts: Influences of School Size and Ethnic Group.
ERIC Educational Resources Information Center
Toenjes, Laurence A.
Longitudinal dropout rates (LDR's) for public school students and LDR's of pupil membership by ethnic group based on two Texas Education Agency reports are estimated. LDR's are calculated for the state, by school district size, for the 21 largest districts, and by average high school size. Findings dispel the prevalent perception of the dropout…
75 FR 81887 - Changes in Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
...Modified Base (1% annual-chance) Flood Elevations (BFEs) are finalized for the communities listed below. These modified BFEs will be used to calculate flood insurance premium rates for new buildings and their contents.
Effect of wave function on the proton induced L XRP cross sections for 62Sm and 74W
NASA Astrophysics Data System (ADS)
Shehla, Kaur, Rajnish; Kumar, Anil; Puri, Sanjiv
2015-08-01
The Lk (k = 1, α, β, γ) X-ray production cross sections have been calculated for 74W and 62Sm at incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR theory, in order to assess the influence of the wave function on the XRP cross sections. The calculated cross sections have been compared with the measured cross sections reported in a recent compilation to check the reliability of the calculated values.
NASA Astrophysics Data System (ADS)
Hou, Yu; Kowalski, Adam; Schroder, Kjell; Halmstad, Andrew; Olsen, Thomas; Wiener, Richard
2006-05-01
We characterize the strength of chaos in two different regimes of Modified Taylor-Couette flow with Hourglass Geometry: the formation of Taylor Vortices with laminar flow and with turbulent flow. We measure the strength of chaos by calculating the correlation dimension and the Kaplan-Yorke dimension based upon the Lyapunov Exponents of each system. We determine the reliability of our calculations by considering data from a chaotic electronic circuit. In order to predict the behavior of the Modified Taylor-Couette flow system, we employ simulations based upon an idealized Reaction-Diffusion model with a third order non-linearity in the reaction rate. Variation of reaction rate with length corresponds to variation of the effective Reynolds Number along the Taylor-Couette apparatus. We present preliminary results and compare to experimental data.
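The Kaplan-Yorke dimension mentioned above can be computed from a Lyapunov spectrum as in this sketch; the example exponents are illustrative, not values measured from the flow:

```python
# Minimal sketch of the Kaplan-Yorke (Lyapunov) dimension:
# D_KY = j + (sum of the j largest exponents) / |lambda_{j+1}|,
# where j is the largest index for which the cumulative sum stays >= 0.

def kaplan_yorke_dimension(exponents):
    lams = sorted(exponents, reverse=True)
    cum, j = 0.0, 0
    for lam in lams:
        if cum + lam < 0:
            break
        cum += lam
        j += 1
    if j == len(lams):
        # Cumulative sum never goes negative: dimension equals phase-space dim.
        return float(j)
    return j + cum / abs(lams[j])

# An illustrative chaotic-flow spectrum (one positive, one zero, one negative):
print(kaplan_yorke_dimension([0.5, 0.0, -1.0]))  # 2.5
```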
Towards a novel look on low-frequency climate reconstructions
NASA Astrophysics Data System (ADS)
Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti
2010-05-01
Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
NASA Astrophysics Data System (ADS)
Dolan, K. A.
2015-12-01
Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios. Understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is therefore of high importance. This study calculated the theoretical threshold rate of disturbance beyond which forest ecosystems could no longer be sustained (λ*) across the conterminous U.S. using an advanced process-based ecosystem model (ED). Published rates of disturbance (λ) at 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. Disturbance distance (λ* - λ) was calculated for each site by differencing the model-based threshold under current climate conditions and the average observed rate of disturbance over the last quarter century. Preliminary results confirm that all sampled forest sites have current average rates of disturbance below λ*, but there were interesting patterns in the recorded disturbance distances. In general, western sites had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is being conducted to assess the vulnerability of these sites to potential future changes by propagating scenarios of future climate and land-use change through the analysis.
CREME96 and Related Error Rate Prediction Methods
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects.
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics). The Single Event Figure of Merit method was also revised to use the solar minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently a series of commercial codes was developed by TRAD (Test & Radiations) which includes the OMERE code which calculates single event effects. There are other error rate prediction methods which use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.
Smoking rate and periodontal disease prevalence: 40-year trends in Sweden 1970-2010.
Bergstrom, Jan
2014-10-01
To investigate the relationship between smoking rate and periodontal disease prevalence in Sweden. National smoking rates were found from Swedish National Statistics on smoking habits. Based on smoking rates for the years 1970-2010, periodontal disease prevalence estimates were calculated for the age bracket 40-70 years and smoking-associated relative risks between 2.0 and 20.0. The impact of smoking on the population was estimated according to the concept of population attributable fraction. The age-standardized smoking rate in Sweden declined from 44% in 1970 to 15% in 2010. In parallel with the smoking decline the calculated prevalence estimate of periodontal disease dropped from 26% to 12% assuming a 10-fold smoking-associated relative risk. Even at more moderate magnitudes of the relative risk, e.g. 2-fold or 5-fold, the prevalence decrease was quite tangible, suggesting that the current prevalence in Sweden is about 20-50% of the level 40 years ago. The population attributable fraction, estimating the portion of the disease that would have been avoided in the absence of smoking, was 80% in 1970 and 58% in 2010 at a ten-fold relative risk. Calculated estimates of periodontal disease prevalence are closely related to real changes in smoking rate. As smoking rate drops periodontal disease prevalence will drop. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
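The population attributable fraction used above follows the standard formula PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the exposure (smoking) rate and RR the relative risk; this sketch approximately reproduces the reported 80% (1970) and 58% (2010) figures at a 10-fold relative risk:

```python
# Population attributable fraction: the share of disease that would be
# avoided in the absence of exposure, PAF = p*(RR-1) / (1 + p*(RR-1)).

def paf(p, rr):
    return p * (rr - 1) / (1 + p * (rr - 1))

# 1970: 44% smoking rate; 2010: 15% smoking rate; 10-fold relative risk.
print(round(paf(0.44, 10), 2))  # 0.8
print(round(paf(0.15, 10), 2))  # 0.57
```

The small gap to the reported 58% for 2010 presumably reflects rounding of the input rates in the original calculation.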
Study of fatigue crack propagation in Ti-1Al-1Mn based on the calculation of cold work evolution
NASA Astrophysics Data System (ADS)
Plekhov, O. A.; Kostina, A. A.
2017-05-01
This work proposes a numerical method for lifetime assessment of metallic materials based on consideration of the energy balance at the crack tip. The method evaluates the stored energy per loading cycle. To calculate the stored and dissipated parts of the deformation energy, an elasto-plastic phenomenological model of the energy balance in metals under deformation and failure was proposed. The key point of the model is a strain-type internal variable describing the energy storage process. This parameter is introduced, based on a statistical description of defect evolution in metals, as a second-order tensor and has the meaning of an additional strain due to the initiation and growth of defects. The fatigue crack growth rate was calculated in the framework of a stationary crack approach (several loading cycles for every crack length were considered to estimate the energy balance at the crack tip). The application of the proposed algorithm is illustrated by calculating the lifetime of a Ti-1Al-1Mn compact tension specimen under cyclic loading.
Recent increases in sediment and nutrient accumulation in Bear Lake, Utah/Idaho, USA
Smoak, J.M.; Swarzenski, P.W.
2004-01-01
This study examines historical changes in sediment and nutrient accumulation rates in Bear Lake along the northeastern Utah/Idaho border, USA. Two sediment cores were dated by measuring excess 210Pb activities and applying the constant rate of supply (CRS) dating model. Historical rates of bulk sediment accumulation were calculated based on the ages within the sediment cores. Bulk sediment accumulation rates increased throughout the last 100 years. According to the CRS model, bulk sediment accumulation rates were <25 mg cm-2 year-1 prior to 1935. Between 1935 and 1980, bulk sediment accumulation rates increased to approximately 40 mg cm-2 year-1. This increase in sediment accumulation probably resulted from the re-connection of Bear River to Bear Lake. Bulk sediment accumulation rates accelerated again after 1980. Accumulation rates of total phosphorus (TP), total nitrogen (TN), total inorganic carbon (TIC), and total organic carbon (TOC) were calculated by multiplying bulk sediment accumulation rates by the concentrations of these nutrients in the sediment. Accumulation rates of TP, TN, TIC, and TOC increased as a consequence of increased bulk sediment accumulation rates after the re-connection of Bear River with Bear Lake.
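The CRS dating step described above can be sketched with the standard model equations t(x) = (1/λ) ln(A(0)/A(x)) and r(x) = λ A(x)/C(x); the inventory and activity values below are illustrative, not core data:

```python
import math

# Minimal sketch of the constant rate of supply (CRS) 210Pb model.
# A(x) = cumulative excess 210Pb inventory below depth x, A(0) = total
# inventory, C(x) = excess 210Pb activity concentration at depth x.

LAMBDA_PB210 = math.log(2) / 22.3   # 210Pb decay constant, yr^-1 (t1/2 = 22.3 yr)

def crs_age(total_inventory, inventory_below_depth):
    """Layer age: t(x) = (1/lambda) * ln(A(0) / A(x))."""
    return math.log(total_inventory / inventory_below_depth) / LAMBDA_PB210

def crs_mass_accumulation_rate(inventory_below_depth, activity_at_depth):
    """Bulk mass accumulation rate: r(x) = lambda * A(x) / C(x)."""
    return LAMBDA_PB210 * inventory_below_depth / activity_at_depth

# Sanity check: when half the inventory lies below a depth, that layer is
# one half-life old.
print(round(crs_age(10.0, 5.0), 1))  # 22.3
```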
NASA Astrophysics Data System (ADS)
Goyal, M.; Chakravarty, A.; Atrey, M. D.
2017-02-01
Performance of modern helium refrigeration/ liquefaction systems depends significantly on the effectiveness of heat exchangers. Generally, compact plate fin heat exchangers (PFHE) having very high effectiveness (>0.95) are used in such systems. Apart from basic fluid film resistances, various secondary parameters influence the sizing/ rating of these heat exchangers. In the present paper, sizing calculations are performed, using in-house developed numerical models/ codes, for a set of high effectiveness PFHE for a modified Claude cycle based helium liquefier/ refrigerator operating in the refrigeration mode without liquid nitrogen (LN2) pre-cooling. The combined effects of secondary parameters like axial heat conduction through the heat exchanger metal matrix, parasitic heat in-leak from surroundings and variation in the fluid/ metal properties are taken care of in the sizing calculation. Numerical studies are carried out to predict the off-design performance of the PFHEs in the refrigeration mode with LN2 pre-cooling. Iterative process cycle calculations are also carried out to obtain the inlet/ exit state points of the heat exchangers.
NASA Astrophysics Data System (ADS)
Michael, R. A.; Stuart, A. L.
2007-12-01
Phase partitioning during freezing affects the transport and distribution of volatile chemical species in convective clouds. This consequently can have impacts on tropospheric chemistry, air quality, pollutant deposition, and climate change. Here, we discuss the development, evaluation, and application of a mechanistic model for the study and prediction of volatile chemical partitioning during steady-state hailstone growth. The model estimates the fraction of a chemical species retained in a two-phase freezing hailstone. It is based upon mass rate balances over water and solute for accretion under wet-growth conditions. Expressions for the calculation of model components, including the rates of super-cooled drop collection, shedding, evaporation, and hail growth were developed and implemented based on available cloud microphysics literature. Solute fate calculations assume equilibrium partitioning at air-liquid and liquid-ice interfaces. Currently, we are testing the model by performing mass balance calculations, sensitivity analyses, and comparison to available experimental data. Application of the model will improve understanding of the effects of cloud conditions and chemical properties on the fate of dissolved chemical species during hail growth.
A user-friendly one-dimensional model for wet volcanic plumes
Mastin, Larry G.
2007-01-01
This paper presents a user-friendly, graphically based numerical model of one-dimensional, steady-state, homogeneous volcanic plumes that calculates and plots profiles of upward velocity, plume density, radius, temperature, and other parameters as a function of height. The model considers effects of water condensation and ice formation on plume dynamics as well as the effect of water added to the plume at the vent. Atmospheric conditions may be specified through input parameters of constant lapse rates and relative humidity, or by loading profiles of actual atmospheric soundings. To illustrate the utility of the model, we compare calculations with field-based estimates of plume height (∼9 km) and eruption rate (>∼4 × 10^5 kg/s) during a brief tephra eruption at Mount St. Helens on 8 March 2005. Results show that the atmospheric conditions on that day boosted plume height by 1–3 km over that in a standard dry atmosphere. Although the eruption temperature was unknown, model calculations most closely match the observations for a temperature that is below magmatic but above 100°C.
Medders, Gregory R.; Alguire, Ethan C.; Jain, Amber; ...
2017-01-18
Here, we employ surface hopping trajectories to model the short-time dynamics of gas-phase and partially solvated 4-(N,N-dimethylamino)benzonitrile (DMABN), a dual fluorescent molecule that is known to undergo a nonadiabatic transition through a conical intersection. To compare theory with time-resolved fluorescence measurements, we calculate the mixed quantum-classical density matrix and the ensemble-averaged transition dipole moment. We introduce a diabatization scheme based on the oscillator strength to convert the TDDFT adiabatic states into diabatic states of La and Lb character. Somewhat surprisingly, we find that the rate of relaxation reported by emission to the ground state is almost 50% slower than the adiabatic population relaxation. Although our calculated adiabatic rates are largely consistent with previous theoretical calculations and no obvious effects of decoherence are seen, the diabatization procedure introduced here enables an explicit picture of dynamics in the branching plane, raising tantalizing questions about geometric phase effects in systems with dozens of atoms.
Non-equilibrium radiation from viscous chemically reacting two-phase exhaust plumes
NASA Technical Reports Server (NTRS)
Penny, M. M.; Smith, S. D.; Mikatarian, R. R.; Ring, L. R.; Anderson, P. G.
1976-01-01
A knowledge of the structure of the rocket exhaust plumes is necessary to solve problems involving plume signatures, base heating, plume/surface interactions, etc. An algorithm is presented which treats the viscous flow of multiphase chemically reacting fluids in a two-dimensional or axisymmetric supersonic flow field. The gas-particle flow solution is fully coupled with the chemical kinetics calculated using an implicit scheme to calculate chemical production rates. Viscous effects include chemical species diffusion with the viscosity coefficient calculated using a two-equation turbulent kinetic energy model.
Monte Carlo dose calculations for high-dose-rate brachytherapy using GPU-accelerated processing.
Tian, Z; Zhang, M; Hrycushko, B; Albuquerque, K; Jiang, S B; Jia, X
2016-01-01
Current clinical brachytherapy dose calculations are typically based on the Association of American Physicists in Medicine Task Group report 43 (TG-43) guidelines, which approximate patient geometry as an infinitely large water phantom. This ignores patient and applicator geometries and heterogeneities, causing dosimetric errors. Although Monte Carlo (MC) dose calculation is commonly recognized as the most accurate method, its long computational time is a major bottleneck for routine clinical applications. This article presents our recent development of a fast MC dose calculation package for high-dose-rate (HDR) brachytherapy, gBMC, built on a graphics processing unit (GPU) platform. gBMC simulates photon transport in voxelized geometry, with the physics relevant to the 192Ir HDR brachytherapy energy range considered. A phase-space file was used as a source model. GPU-based parallel computation was used to simultaneously transport multiple photons, one per GPU thread. We validated gBMC by comparing its dose calculation results in water with those computed with TG-43. We also studied heterogeneous phantom cases and a patient case and compared gBMC results with Acuros BV results. The radial dose function in water calculated by gBMC showed <0.6% relative difference from the TG-43 data. The difference in the anisotropy function was <1%. In two heterogeneous slab phantoms and one shielded cylinder applicator case, the average dose discrepancy between gBMC and Acuros BV was <0.87%. For a tandem-and-ovoid patient case, good agreement between gBMC and Acuros BV results was observed in both isodose lines and dose-volume histograms. In terms of efficiency, it took ∼47.5 seconds for gBMC to reach 0.15% statistical uncertainty within the 5% isodose line for the patient case. The accuracy and efficiency of the new GPU-based MC dose calculation package, gBMC, for HDR brachytherapy make it attractive for clinical applications. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Permeability of model porous medium formed by random discs
NASA Astrophysics Data System (ADS)
Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.
2018-03-01
A two-dimensional model of a porous medium whose skeleton is formed by randomly located overlapping discs is proposed. The geometry and computational grid are built in the open-source package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate is determined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the domain is carried out in the open-source package OpenFOAM. The calculated flow rate is used to determine the permeability coefficient on the basis of Darcy's law. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
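The Darcy-law post-processing step described above (backing out a permeability coefficient from the computed flow rate) can be sketched as follows; all numerical values are illustrative:

```python
# Darcy's law rearranged for permeability: k = Q * mu * L / (A * dP),
# with Q the volumetric flow rate from the CFD solution, mu the dynamic
# viscosity, L the sample length, A its cross-sectional area, and dP the
# imposed pressure drop. SI units give k in m^2.

def permeability(flow_rate, viscosity, length, area, dp):
    return flow_rate * viscosity * length / (area * dp)

# Illustrative numbers: water (mu ~ 1e-3 Pa*s) through a 1 cm long,
# 1 cm^2 cross-section sample at a 100 Pa pressure drop.
k = permeability(flow_rate=1e-9, viscosity=1e-3, length=0.01, area=1e-4, dp=100.0)
print(k)  # ~1e-12 m^2, i.e. on the order of 1 darcy
```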
Effects of atmospheric pressure conditions on flow rate of an elastomeric infusion pump.
Wang, Jong; Moeller, Anna; Ding, Yuanpang Samuel
2012-04-01
The effects of pressure conditions, both hyperbaric and hypobaric, on the flow rate of an elastomeric infusion pump were investigated. The altered pressure conditions were tested with the restrictor outlet at two different conditions: (1) at the same pressure condition as the Infusor elastomeric balloon and (2) with the outlet exposed to ambient conditions. Five different pressure conditions were tested. These included ambient pressure (98-101 kilopascals [kPa]) and test pressures controlled to be 10 or 20 kPa below or 75 or 150 kPa above the ambient pressure. A theoretical calculation based on the principles of fluid mechanics was also used to predict the pump's flow rate at various ambient conditions. The conditions in which the Infusor elastomeric pump and restrictor outlet were at the same pressure gave rise to average flow rates within the ±10% tolerance of the calculated target flow rate of 11 mL/hr. The flow rate of the Infusor pump decreased when the pressure conditions changed from hypobaric to ambient. The flow rate increased when the pressure conditions changed from hyperbaric to ambient. The flow rate of the Infusor elastomeric pump was not affected when the balloon reservoir and restrictor outlet were at the same pressure. The flow rate varied from 58.54% to 377.04% of the labeled flow rate when the pressure applied to the reservoir varied from 20 kPa below to 150 kPa above the pressure applied to the restrictor outlet, respectively. The maximum difference between observed flow rates and those calculated by applying fluid mechanics was 4.9%.
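A minimal linear sketch of how a reservoir-outlet pressure imbalance shifts the flow rate, assuming flow proportional to the total driving pressure; the 48 kPa balloon driving pressure is an assumed value chosen to be roughly consistent with the reported 58.54% figure, not a parameter taken from the study:

```python
# Toy model: flow through the restrictor scales with total driving pressure,
# Q ~ (balloon gauge pressure + reservoir-minus-outlet offset). The balloon
# driving pressure below is an ASSUMED illustrative value.

BALLOON_DRIVING_KPA = 48.0  # assumed elastomer gauge pressure

def relative_flow(reservoir_minus_outlet_kpa):
    """Flow rate as a fraction of the balanced-pressure (labeled) flow."""
    return (BALLOON_DRIVING_KPA + reservoir_minus_outlet_kpa) / BALLOON_DRIVING_KPA

print(round(relative_flow(0.0), 3))    # 1.0   -> balanced: labeled flow rate
print(round(relative_flow(-20.0), 3))  # 0.583 -> reservoir 20 kPa below outlet
print(round(relative_flow(150.0), 3))  # 4.125 -> reservoir 150 kPa above outlet
```

The trend matches the study's observations (reduced flow under a relative vacuum at the reservoir, strongly increased flow under overpressure), though the measured 58.54% and 377.04% values reflect the full fluid-mechanics calculation rather than this linear simplification.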
Electron-impact Ionization of P-like Ions Forming Si-like Ions
NASA Astrophysics Data System (ADS)
Kwon, D.-H.; Savin, D. W.
2014-03-01
We have calculated electron-impact ionization (EII) cross sections for P-like systems from P to Zn15+ forming Si-like ions. The work was performed using the flexible atomic code (FAC), which is based on a distorted-wave approximation. All 3l → nl' (n = 3-35) excitation-autoionization (EA) channels near the 3p direct ionization threshold and 2l → nl' (n = 3-10) EA channels at higher energies are included. Close attention has been paid to the detailed branching ratios. Our calculated total EII cross sections are compared both with previous FAC calculations, which omitted many of these EA channels, and with the available experimental results. Moreover, for Fe11+, we find that part of the remaining discrepancies between our calculations and recent measurements can be accounted for by the inclusion of the resonant excitation double autoionization process. Lastly, at the temperatures where each ion is predicted to peak in abundance in collisional ionization equilibrium, the Maxwellian rate coefficients derived from our calculations differ by 50% to 7% from the previous FAC rate coefficients, with the difference decreasing with increasing charge.
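The Maxwellian rate coefficients mentioned above are obtained by thermally averaging the cross sections, ⟨σv⟩ = ∫ σ(E) v f_MB(E) dE; here is a dimensionless sketch with a toy constant cross section standing in for FAC output:

```python
import math

# Numerical Maxwellian average of a cross section, in dimensionless units
# (kT = 1, particle mass chosen so that v = sqrt(E)). The constant cross
# section is a toy stand-in for tabulated FAC cross sections.

def maxwellian_rate(sigma, kT=1.0, n=200000, e_max=40.0):
    dE = e_max / n
    total = 0.0
    for i in range(1, n + 1):
        E = (i - 0.5) * dE  # midpoint rule
        # Maxwell-Boltzmann energy distribution f(E)
        f = 2.0 / math.sqrt(math.pi) * kT ** -1.5 * math.sqrt(E) * math.exp(-E / kT)
        total += sigma(E) * math.sqrt(E) * f * dE  # v = sqrt(E) in these units
    return total

# For a constant cross section the analytic result is the mean speed,
# <v> = 2 / sqrt(pi) in these units, so the numerical average should match:
rate = maxwellian_rate(lambda E: 1.0)
print(abs(rate - 2.0 / math.sqrt(math.pi)) < 1e-4)  # True
```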
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
NASA Astrophysics Data System (ADS)
Wang, Yu-Nan; Yang, Jian; Xin, Xiu-Ling; Wang, Rui-Zhi; Xu, Long-Yun
2016-04-01
In the present study, the effect of cooling conditions on the evolution of non-metallic inclusions in high manganese TWIP steels was investigated based on experiments and thermodynamic calculations. In addition, the formation and growth behavior of AlN inclusions during solidification under different cooling conditions were analyzed with the help of thermodynamics and kinetics. The inclusions formed in the high manganese TWIP steels are classified into nine types: (1) AlN; (2) MgO; (3) CaS; (4) MgAl2O4; (5) AlN + MgO; (6) MgO + MgS; (7) MgO + MgS + CaS; (8) MgO + CaS; (9) MgAl2O4 + MgS. With increasing cooling rate, the volume fraction and area ratio of inclusions remain almost constant, while the size of inclusions decreases and the number density of inclusions increases. The thermodynamic results for inclusion types calculated with FactSage are consistent with the observed results. With increasing cooling rate, the diameter of AlN decreases. When the cooling rate increases from 0.75 to 4.83 K s-1, the measured average diameter of AlN decreases from 4.49 to 2.42 μm. Under the high cooling rate of 4.83 K s-1, the calculated diameter of AlN reaches 3.59 μm at the end of solidification, whereas under the low cooling rate of 0.75 K s-1 it increases to approximately 5.93 μm. The theoretically calculated change in AlN diameter under the different cooling rates shows the same trend as the observed results. The inclusions in the steels, in particular AlN with average sizes of 2.42 and 4.49 μm, respectively, are not considered to have an obvious influence on the hot ductility.
An Improved Method of Pose Estimation for Lighthouse Base Station Extension.
Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang
2017-10-22
In 2015, HTC and Valve launched a virtual reality headset empowered by Lighthouse, a cutting-edge space positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed with respect to occlusion of moving targets; that is, they cannot calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning when only a few sensors detect the signal.
Development of a nuclear technique for monitoring water levels in pressurized vehicles
NASA Technical Reports Server (NTRS)
Singh, J. J.; Davis, W. T.; Mall, G. H.
1983-01-01
A new technique for monitoring water levels in pressurized stainless steel cylinders was developed. It is based on differences in the attenuation coefficients of water and air for Cs137 (662 keV) gamma rays. Experimentally observed gamma ray counting rates with and without water in a model reservoir cylinder were compared with corresponding calculated values for two different gamma ray detection threshold energies. Calculated values include the effects of multiple scattering and the attendant gamma ray energy reductions. The agreement between the measured and calculated values is reasonably good. Computer programs for calculating angular and spectral distributions of scattered radiation in various media are included.
NASA Astrophysics Data System (ADS)
Kojima, H.; Yamada, A.; Okazaki, S.
2015-05-01
The intramolecular proton transfer reaction of malonaldehyde in neon solvent has been investigated by mixed quantum-classical molecular dynamics (QCMD) calculations and fully classical molecular dynamics (FCMD) calculations. Comparing these calculated results with those for malonaldehyde in water reported in Part I [A. Yamada, H. Kojima, and S. Okazaki, J. Chem. Phys. 141, 084509 (2014)], the solvent dependence of the reaction rate, the reaction mechanism involved, and the quantum effect therein have been investigated. With FCMD, the reaction rate in weakly interacting neon is lower than that in strongly interacting water. However, with QCMD, the order of the reaction rates is reversed. To investigate the mechanisms in detail, the reactions were categorized into three mechanisms: tunneling, thermal activation, and barrier vanishing. Then, the quantum and solvent effects were analyzed from the viewpoint of the reaction mechanism focusing on the shape of potential energy curve and its fluctuations. The higher reaction rate that was found for neon in QCMD compared with that found for water solvent arises from the tunneling reactions because of the nearly symmetric double-well shape of the potential curve in neon. The thermal activation and barrier vanishing reactions were also accelerated by the zero-point energy. The number of reactions based on these two mechanisms in water was greater than that in neon in both QCMD and FCMD because these reactions are dominated by the strength of solute-solvent interactions.
Tsunami probability in the Caribbean Region
Parsons, T.; Geist, E.L.
2008-01-01
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically-calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerically-modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhäuser 2008.
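Under the Poissonian model, the 30-year probability in each coastal cell follows directly from its mean runup rate λ as P = 1 − exp(−λT). A minimal sketch (the example rate of one exceedance per 250 years is illustrative, not a value from the paper):

```python
import math

def poisson_runup_probability(mean_rate_per_yr, window_yr=30.0):
    """Probability of at least one runup exceedance in a time window,
    assuming uncorrelated (Poissonian) tsunami sources."""
    return 1.0 - math.exp(-mean_rate_per_yr * window_yr)

# e.g., a 20 km cell whose estimated rate is one >0.5 m runup per 250 years:
p30 = poisson_runup_probability(1.0 / 250.0, 30.0)
```

The Bayesian step in the study then amounts to weighting a range of candidate rates per cell before applying this same exceedance formula.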
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D; Schuemann, J; Dowdell, S
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) against a fully implemented proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical geometrical complexity (air cavities and density heterogeneities) that makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a gamma index passing rate for the target of more than 99%, with the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy MCS code for a group of dosimetrically challenging patient cases.
NASA Astrophysics Data System (ADS)
Glushkov, Alexander; Loboda, Andrey; Nikola, Ludmila
2011-10-01
We present a uniform energy approach, formally based on gauge-invariant relativistic many-body perturbation theory, for the calculation of radiative and autoionization probabilities, electron collision strengths, and rate coefficients in multicharged ions (in a collisionally pumped plasma). The influence of the plasma medium is accounted for within a Debye shielding approach. The aim is to study, in a uniform manner, the elementary processes responsible for emission-line formation in a plasma. The energy shift due to the collision first arises in the second order of perturbation theory, in the form of an integral over the scattered electron energy. The cross-section is linked with the imaginary part of the scattering energy shift. The electron collision excitation cross-sections and rate coefficients for some plasma Ne- and Ar-like multicharged ions are calculated within the relativistic energy approach. We present the results of calculations of the autoionization resonance energies and widths in heavy He-like multicharged ions and in the rare-earth atoms Gd and Tm. To test the results of the calculations we compare the obtained data for some Ne-like ions with other authors' calculations and available experimental data for a wide range of plasma conditions.
Nuclear fusion and carbon flashes on neutron stars
NASA Technical Reports Server (NTRS)
Taam, R. E.; Picklum, R. E.
1978-01-01
This paper reports on detailed calculations of the thermal evolution of the carbon-burning shells in the envelopes of accreting neutron stars for mass-accretion rates of 1 hundred-billionth to 2 billionths of a solar mass per yr and neutron-star masses of 0.56 and 1.41 solar masses. The work of Hansen and Van Horn (1975) is extended to higher densities, and a more detailed treatment of nuclear processing in the hydrogen- and helium-burning regions is included. Results of steady-state calculations are presented, and results of time-dependent computations are examined for accretion rates of 3 ten-billionths and 1 billionth of a solar mass per yr. It is found that two evolutionary sequences lead to carbon flashes and that the carbon abundance at the base of the helium shell is a strong function of accretion rate. Upper limits are placed on the accretion rates at which carbon flashes will be important.
Incentivizing Decentralized Sanitation: The Role of Discount Rates.
Wood, Alison; Blackhurst, Michael; Garland, Jay L; Lawler, Desmond F
2016-06-21
In adoption decisions for decentralized sanitation technologies, two decision makers are involved: the public utility and the individual homeowner. Standard life cycle cost is calculated from the perspective of the utility, which uses a market-based discount rate in these calculations. However, both decision-makers must be considered, including their differing perceptions of the time trade-offs inherent in a stream of costs and benefits. This study uses the discount rate as a proxy for these perceptions and decision-maker preferences. The results in two case studies emphasize the dependence on location of such analyses. Falmouth, Massachusetts, appears to be a good candidate for incentivizing decentralized sanitation while the Allegheny County Sanitary Authority service area in Pennsylvania appears to have no need for similar incentives. This method can be applied to any two-party decision in which the parties are expected to have different discount rates.
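The role of the discount rate can be made concrete with a standard net-present-value calculation evaluated from each party's perspective. The cash flows and both rates below are hypothetical, chosen only to show how a higher homeowner discount rate shrinks the present value of future costs relative to the utility's market-based rate:

```python
def npv(cashflows, rate):
    """Net present value of a stream of annual costs (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical sanitation option: $10,000 up front, $300/yr O&M for 20 years.
costs = [10_000.0] + [300.0] * 20

utility_cost   = npv(costs, 0.05)  # utility's market-based rate (assumed 5%)
homeowner_cost = npv(costs, 0.15)  # homeowner's implicit rate (assumed 15%)
```

Because the homeowner discounts the future more steeply, the same stream of O&M costs weighs less in the homeowner's life cycle cost, so the two parties can rank the same technology differently, which is the incentive gap the study examines.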
A Study on Multi-Swing Stability Analysis of Power System using Damping Rate Inversion
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Morii, Yuki; Oyama, Tsutomu; Hashiguchi, Takuhei; Goda, Tadahiro; Nomiyama, Fumitoshi; Kosugi, Narifumi
In recent years, much attention has been paid to nonlinear analysis methods in the field of power system stability analysis. Especially for multi-swing stability analysis, the unstable limit cycle has an important meaning as a stability margin. A high-speed method for calculating the stability boundary with respect to multi-swing stability is required, because real-time calculation of available transfer capability (ATC) is necessary to realize flexible wheeling trades. Therefore, the authors have developed a new method that can calculate the unstable limit cycle based on a damping rate inversion method. Using the unstable limit cycle, it is possible to predict multi-swing stability at the time when the faulted transmission line is reclosed. The proposed method is tested on the Lorenz equation, a single-machine infinite-bus system model, and the IEEJ WEST10 system model.
NASA Technical Reports Server (NTRS)
Marti, K.; Lavielle, B.; Regnier, S.
1984-01-01
While previous calculations of potassium ages assumed a constant cosmic ray flux and a single-stage (no change in size) exposure of iron meteorites, the present calculations relaxed these constancy assumptions, and the results reveal multistage irradiations for some 25% of the meteorites studied, implying multiple breakup in space. The distribution of exposure ages suggests several major collisions (based on chemical composition and structure), although the calibration of age scales is not yet complete. It is concluded that shielding-corrected (corrections which depend on the size and position of the sample) production rates are consistent for the age bracket of 300 to 900 million years. These production rates differ in a systematic way from those calculated for present-day fluxes of cosmic rays (such as obtained for the last few million years).
NASA Technical Reports Server (NTRS)
Bergrun, Norman R
1952-01-01
An empirically derived basis for predicting the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The concepts involved represent an initial step toward the development of a calculation technique which is generally applicable to the design of thermal ice-prevention equipment for airplane wing and tail surfaces. It is shown that sufficiently accurate estimates, for the purpose of heated-wing design, can be obtained by a few numerical computations once the velocity distribution over the airfoil has been determined. The calculation technique presented is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer.
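The drop-trajectory computations rest on integrating the drops' equations of motion through the airflow; impingement occurs because a drop's inertia keeps it from following curved streamlines around the airfoil. A much-simplified sketch (Stokes drag with an assumed response time and assumed flow speeds, rather than the full drag law solved on the differential analyzer):

```python
def drop_trajectory(v0, u_air, tau_s, dt, steps):
    """Integrate a water drop's velocity under Stokes drag,
    dv/dt = (u_air - v) / tau, by explicit Euler steps.
    Returns the velocity history."""
    v = v0
    history = [v]
    for _ in range(steps):
        v += (u_air - v) / tau_s * dt
        history.append(v)
    return history

# Hypothetical small drop entering decelerated air ahead of a leading edge:
# the response time tau and both speeds (m/s) are assumed values.
hist = drop_trajectory(v0=80.0, u_air=60.0, tau_s=5e-3, dt=1e-4, steps=500)
```

The lag between the drop velocity and the local air velocity is what the trajectory computations quantify; drops with larger tau (larger diameter) lag more and strike the surface over a wider area.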
NASA Technical Reports Server (NTRS)
Dickerson, R. R.; Stedman, D. H.; Chameides, W. L.; Crutzen, P. J.; Fishman, J.
1979-01-01
The paper presents an experimental technique which measures j(O3 → O(1D)), the rate of solar photolysis of ozone to singlet oxygen atoms. A flow actinometer carries dilute O3 in N2O into direct sunlight, where the O(1D) formed reacts with N2O to form NO, which is detected by chemiluminescence with a time resolution of about one minute. Measurements indicate a photolysis rate of (1.2 ± 0.2) × 10^-5 s^-1 for a cloudless sky, 45 deg zenith angle, 0.345 cm ozone column, and zero albedo. Finally, ground-level results are compared with theoretical calculations based on the UV actinic flux as a function of ozone column and solar zenith angle.
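Such theoretical j-values come from integrating the actinic flux against the ozone absorption cross section and the O(1D) quantum yield over wavelength. In this sketch the tabulated values are illustrative placeholders, not real spectral data:

```python
def photolysis_rate(wavelengths_nm, actinic_flux, cross_section, quantum_yield):
    """j = integral of F(lambda) * sigma(lambda) * phi(lambda) d(lambda),
    evaluated by the trapezoidal rule over tabulated values."""
    j = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = actinic_flux[i] * cross_section[i] * quantum_yield[i]
        f1 = actinic_flux[i + 1] * cross_section[i + 1] * quantum_yield[i + 1]
        j += 0.5 * (f0 + f1) * dw
    return j

# Illustrative (made-up) tabulation over the 300-320 nm region:
# flux in photons cm^-2 s^-1 nm^-1, cross sections in cm^2, yield unitless.
wl    = [300.0, 305.0, 310.0, 315.0, 320.0]
flux  = [2e12, 8e12, 2e13, 4e13, 6e13]
sigma = [3.6e-19, 2.0e-19, 9e-20, 4e-20, 2e-20]
phi   = [0.9, 0.8, 0.5, 0.2, 0.1]

j_o1d = photolysis_rate(wl, flux, sigma, phi)  # s^-1
```

The competing trends in the integrand (flux rising with wavelength while cross section and yield fall) are why j(O3 → O(1D)) is so sensitive to ozone column and zenith angle.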
Sub-second pencil beam dose calculation on GPU for adaptive proton therapy
NASA Astrophysics Data System (ADS)
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-06-01
Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
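Gamma-index analysis combines a dose-difference and a distance-to-agreement criterion; a point passes when γ ≤ 1. A minimal 1-D global-gamma sketch (the example profiles are made up; clinical implementations work on 3-D grids with interpolation):

```python
import math

def gamma_index_1d(ref, evalu, spacing_mm, dose_tol, dist_mm):
    """1-D global gamma for dose profiles sampled on the same grid.
    dose_tol is a fraction of the reference maximum (0.03 for 3%),
    dist_mm is the distance-to-agreement criterion (3.0 for 3 mm)."""
    d_norm = dose_tol * max(ref)
    gammas = []
    for i, d_e in enumerate(evalu):
        best = float("inf")
        # search all reference positions for the minimum combined metric
        for j, d_r in enumerate(ref):
            dist = (i - j) * spacing_mm
            g2 = (dist / dist_mm) ** 2 + ((d_e - d_r) / d_norm) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

def passing_rate(gammas):
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

ref   = [0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0]
evalu = [0.0, 11.0, 52.0, 101.0, 49.0, 10.0, 0.0]
rate = passing_rate(gamma_index_1d(ref, evalu, spacing_mm=1.0,
                                   dose_tol=0.03, dist_mm=3.0))
```

The passing rate is the fraction of evaluated points with γ ≤ 1, which is the quantity the 99.2% and 96.7% figures above report for the 3%/3 mm and 2%/2 mm criteria.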
The study to estimate the floating population in Seoul, Korea.
Lee, Geon Woo; Lee, Yong Jin; Kim, Youngeun; Hong, Seung-Han; Kim, Soohwaun; Kim, Jeong Soo; Lee, Jong Tae; Shin, Dong Chun; Lim, Youngwook
2017-01-01
Traffic-related pollutants have been reported to increase the morbidity of respiratory diseases. In order to apply management policies related to motor vehicles, studies of the floating population living in cities are important. The rate of metro rail transit system use by passengers residing in Seoul is about 54% of total public transportation use. From the rate of metro use, the people-flow ratios in each administrative area were calculated. By applying these people-flow ratios to the official census count, the floating population in 25 regions was calculated. The reduction in deaths among the floating population in the 14 regions having roadside monitoring stations was calculated assuming a 20% reduction of mobile emissions under the policy. The hourly floating population size was calculated by applying the hourly population ratio to the regional population size as specified in the official census count. The number of people moving between 5 a.m. and 1 a.m. the next day could not be precisely calculated from the census population alone, but no issue was observed that would trigger a sizable shift in the rate of population change. Three patterns of population change during work hours were analyzed: increase, decrease, and no change. When the concentration of particulate matter less than 10 μm in aerodynamic diameter was reduced by 20%, the number of excess deaths varied according to the difference in the floating population. The effective establishment of directions to manage pollutants in cities should be carried out by considering the floating population. Although the number of people using the metro system is only an estimate, this limitation was addressed by calculating the hourly inflow and outflow ratios of metro users within the total floating population in each region. In particular, the 54% share of metro use in public transportation lends high reliability to this application.
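The two central calculations, scaling the census population by an hourly people-flow ratio and converting a pollutant reduction into avoided deaths, can be sketched as follows. All numbers (district size, inflow ratio, baseline deaths, and the concentration-response coefficient beta) are hypothetical, not values from the study:

```python
import math

def hourly_floating_population(census_pop, hourly_ratio):
    """Hourly floating population = official census count multiplied by
    an hourly people-flow ratio derived from metro in/out-flows."""
    return census_pop * hourly_ratio

def avoided_deaths(baseline_deaths, beta_per_ugm3, delta_conc_ugm3):
    """Deaths avoided for a concentration drop, using a log-linear
    concentration-response function (beta is an assumed coefficient)."""
    return baseline_deaths * (1.0 - math.exp(-beta_per_ugm3 * delta_conc_ugm3))

# Hypothetical district: 400,000 residents, 1.3x daytime inflow ratio,
# a 20% cut of a 50 ug/m3 PM10 level, beta = 0.0008 per ug/m3.
daytime_pop = hourly_floating_population(400_000, 1.3)
avoided = avoided_deaths(baseline_deaths=1000.0,
                         beta_per_ugm3=0.0008,
                         delta_conc_ugm3=0.2 * 50.0)
```

Because the avoided-death estimate scales with the exposed population, using the hourly floating population instead of the static census count changes the result wherever daytime inflow or outflow is large.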
Ab initio state-specific N2 + O dissociation and exchange modeling for molecular simulations
NASA Astrophysics Data System (ADS)
Luo, Han; Kulakhmetov, Marat; Alexeenko, Alina
2017-02-01
Quasi-classical trajectory (QCT) calculations are used in this work to calculate state-specific N2(X1Σ) + O(3P) → 2N(4S) + O(3P) dissociation and N2(X1Σ) + O(3P) → NO(X2Π) + N(4S) exchange cross sections and rates based on the 1 3A″ and 1 3A′ ab initio potential energy surfaces by Gamallo et al. [J. Chem. Phys. 119, 2545-2556 (2003)]. The calculations consider translational energies up to 23 eV and temperatures between 1000 K and 20 000 K. Vibrational favoring is observed for the dissociation reaction over the whole range of collision energies and for the exchange reaction around the dissociation limit. For the same collision energy, cross sections for v = 30 are 4 to 6 times larger than those for the ground state. The exchange reaction has an effective activation energy that depends on the initial rovibrational level, unlike the dissociation reaction. In addition, the exchange cross sections have a maximum when the total collision energy (TCE) approaches the dissociation energy. The calculations are used to generate compact QCT-derived state-specific dissociation (QCT-SSD) and QCT-derived state-specific exchange (QCT-SSE) models, which describe over 1 × 10^6 cross sections with about 150 model parameters. The models can be used directly within direct simulation Monte Carlo and computational fluid dynamics simulations. Rate constants predicted by the new models are compared to experimental measurements, direct QCT calculations, and predictions by other models, including the TCE model, the Bose-Candler QCT-based exchange model, the Macheret-Fridman dissociation model, Macheret's exchange model, and Park's two-temperature model. The new models match QCT-calculated and experimental rates within 30% under nonequilibrium conditions, while the other models underpredict by over an order of magnitude under vibrationally cold conditions.
Webber, Mayris P.; Moir, William; Crowson, Cynthia S.; Cohen, Hillel W.; Zeig-Owens, Rachel; Hall, Charles B.; Berman, Jessica; Qayyum, Basit; Jaber, Nadia; Matteson, Eric L.; Liu, Yang; Kelly, Kerry; Prezant, David J.
2016-01-01
Objective: To estimate the incidence of selected systemic autoimmune diseases (SAIDs) in approximately 14,000 male rescue/recovery workers enrolled in the Fire Department of the City of New York (FDNY) World Trade Center (WTC) Health Program and to compare FDNY incidence to rates from demographically similar men in the Rochester Epidemiology Project (REP), a population-based database in Olmsted County, Minnesota. Patients and Methods: We calculated incidence for specific SAIDs (rheumatoid arthritis, psoriatic arthritis, systemic lupus erythematosus, and others) and combined SAIDs diagnosed from September 12, 2001, through September 11, 2014, and generated expected sex- and age-specific rates based on REP rates. Rates were stratified by level of WTC exposure (higher vs lower). Standardized incidence ratios (SIRs), which are the ratios of the observed number of cases in the FDNY group to the expected number of cases based on REP rates, and 95% CIs were calculated. Results: We identified 97 SAID cases. Overall, FDNY rates were not significantly different from expected rates (SIR, 0.97; 95% CI, 0.77–1.21). However, the lower WTC exposure group had 9.9 fewer cases than expected, whereas the higher WTC exposure group had 7.7 excess cases. Conclusion: Most studies indicate that the healthy worker effect reduces the association between exposure and outcome by about 20%, which we observed in the lower WTC exposure group. Overall rates masked differences in incidence by level of WTC exposure, especially because the higher WTC exposure group was relatively small. Continued surveillance for early detection of SAIDs in high WTC exposure populations is required to identify and treat exposure-related adverse effects. PMID:26682920
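The SIR and its confidence interval can be reproduced approximately from the reported counts. A sketch using Byar's approximation to the exact Poisson limits (the expected count of ~100 is back-calculated from the reported SIR of 0.97, and the approximate limits differ slightly from the paper's exact 0.77-1.21):

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (observed/expected) with an
    approximate 95% CI via Byar's approximation to the exact
    Poisson limits on the observed count."""
    sir = observed / expected
    o = float(observed)
    lower = (o / expected) * \
        (1.0 - 1.0 / (9.0 * o) - z / (3.0 * math.sqrt(o))) ** 3
    op1 = o + 1.0
    upper = (op1 / expected) * \
        (1.0 - 1.0 / (9.0 * op1) + z / (3.0 * math.sqrt(op1))) ** 3
    return sir, lower, upper

# 97 observed SAID cases against ~100 expected from REP reference rates:
sir, lo, hi = sir_with_ci(97, 100.0)
```

Since the interval spans 1.0, overall FDNY incidence is not significantly different from the REP-based expectation, matching the abstract's conclusion.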
Evaluating Iowa Severe Maternal Morbidity Trends and Maternal Risk Factors: 2009-2014.
Frederiksen, Brittni N; Lillehoj, Catherine J; Kane, Debra J; Goodman, Dave; Rankin, Kristin
2017-09-01
Objectives: To describe statewide SMM trends in Iowa from 2009 to 2014 and identify maternal characteristics associated with SMM, overall and by age group. Methods: We used 2009-2014 linked Iowa birth certificate and hospital discharge data to calculate SMM based on a 25-condition definition and a 24-condition definition. The 24-condition definition parallels the 25-condition definition but excludes blood transfusions. We calculated SMM rates for all delivery hospitalizations (N = 196,788) using ICD-9-CM diagnosis and procedure codes. We used log-binomial regression to assess the association of SMM with maternal characteristics, overall and stratified by age groupings. Results: In contrast to national rates, Iowa's 25-condition SMM rate decreased from 2009 to 2014. Based on the 25-condition definition, SMM rates were significantly higher among women <20 years and >34 years compared to women 25-34 years. Blood transfusion was the most prevalent indicator, with hysterectomy and disseminated intravascular coagulation (DIC) among the top five conditions. Based on the 24-condition definition, younger women had the lowest SMM rates and older women had the highest SMM rates. SMM rates were also significantly higher among racial/ethnic minorities compared to non-Hispanic white women. Payer was the only risk factor differentially associated with SMM across age groups. First-trimester prenatal care initiation was protective for SMM in all models. Conclusions: High rates of blood transfusion, hysterectomy, and DIC indicate a need to focus on reducing hemorrhage in Iowa. Both younger and older women and racial/ethnic minorities are identified as high-risk groups for SMM that may benefit from special consideration and focus.
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization, which is estimated based on ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing computation of ionization rate altitude profiles in the range of 20 to 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g., NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is written to provide an end-user interface to the model.
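A standard way to turn simulated energy deposition into ion production is to divide the local energy deposition rate by the mean energy expended per ion pair, W ≈ 35 eV in air (an assumed textbook value here; the actual model works through pre-computed yield functions rather than this single constant):

```python
W_AIR_EV = 35.0  # assumed mean energy per ion pair in air, eV

def ion_production_rate(energy_deposition_ev_cm3_s, w_ev=W_AIR_EV):
    """Ion-pair production rate (pairs cm^-3 s^-1) from the local
    energy deposition rate: q = (dE/dt) / W."""
    return energy_deposition_ev_cm3_s / w_ev

# e.g., 7 keV cm^-3 s^-1 deposited at some altitude:
q = ion_production_rate(7.0e3)
```

Stacking this conversion over the Monte Carlo deposition profile yields the ionization-rate altitude profile between 20 and 200 km that the model delivers.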
Mass gathering medical care: to calculate the Medical Usage Rate of Galway Races.
Shah, Waqar
2010-01-01
Medical Usage Rate (MUR) of the Galway Races: The Galway Races is the most popular horse-racing festival in Ireland. It takes place for a week starting from the last Monday in July. The races are held at the Ballybrit race course in Galway. During the 7 days of racing, about 180,000 people attend. The average temperature in Galway at that time of the year is around 15-20°C. The aim of this study is to calculate the MUR of the Galway Races and to develop a model to predict the MUR for future Galway Races. The MUR was calculated by looking retrospectively at the medical records of 11 years of Galway Races, from 1997 to 2007. The Galway Races has a MUR of 3.67 patients per ten thousand attendees. Based on the figures for those years, the MUR for the 2008 Galway Races predicted before the races was comparable to the actual 2008 figures.
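The MUR itself is a simple normalized rate: patients presenting to medical services per 10,000 attendees. A sketch (the patient count of ~66 is back-calculated from the reported MUR of 3.67 and the ~180,000 attendance, not taken from the records):

```python
def medical_usage_rate(patients, attendance):
    """Patients presenting to medical services per 10,000 attendees."""
    return patients / attendance * 10_000

# ~66 presentations over a ~180,000-attendance meeting:
mur = medical_usage_rate(66, 180_000)
```

Normalizing by attendance is what lets one year's figures predict the next, provided attendance and conditions (weather, event type) stay comparable.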
Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor
Nathues, Christina; Würbel, Hanno
2016-01-01
Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm–benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman’s rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. 
These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm–benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research. PMID:27911892
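The internal validity score (IVS) and the rank correlation reported above can be sketched as follows. This is a stdlib-only illustration under stated assumptions: the dictionary keys are illustrative labels for the seven measures (not the authors' coding scheme), and the Spearman implementation omits tie correction.

```python
MEASURES = ["allocation_concealment", "blinding", "randomization",
            "sample_size_calculation", "inclusion_exclusion_criteria",
            "primary_outcome_variable", "statistical_analysis_plan"]

def internal_validity_score(described: dict) -> float:
    """IVS: proportion of the seven measures against bias described."""
    return sum(bool(described.get(m)) for m in MEASURES) / len(MEASURES)

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction, for illustration)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

An application describing only blinding and randomization, for example, would score 2/7.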
Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor.
Vogt, Lucile; Reichlin, Thomas S; Nathues, Christina; Würbel, Hanno
2016-12-01
Accumulating evidence indicates high risk of bias in preclinical animal research, questioning the scientific validity and reproducibility of published research findings. Systematic reviews found low rates of reporting of measures against risks of bias in the published literature (e.g., randomization, blinding, sample size calculation) and a correlation between low reporting rates and inflated treatment effects. That most animal research undergoes peer review or ethical review would offer the possibility to detect risks of bias at an earlier stage, before the research has been conducted. For example, in Switzerland, animal experiments are licensed based on a detailed description of the study protocol and a harm-benefit analysis. We therefore screened applications for animal experiments submitted to Swiss authorities (n = 1,277) for the rates at which the use of seven basic measures against bias (allocation concealment, blinding, randomization, sample size calculation, inclusion/exclusion criteria, primary outcome variable, and statistical analysis plan) were described and compared them with the reporting rates of the same measures in a representative sub-sample of publications (n = 50) resulting from studies described in these applications. Measures against bias were described at very low rates, ranging on average from 2.4% for statistical analysis plan to 19% for primary outcome variable in applications for animal experiments, and from 0.0% for sample size calculation to 34% for statistical analysis plan in publications from these experiments. Calculating an internal validity score (IVS) based on the proportion of the seven measures against bias, we found a weak positive correlation between the IVS of applications and that of publications (Spearman's rho = 0.34, p = 0.014), indicating that the rates of description of these measures in applications partly predict their rates of reporting in publications. 
These results indicate that the authorities licensing animal experiments are lacking important information about experimental conduct that determines the scientific validity of the findings, which may be critical for the weight attributed to the benefit of the research in the harm-benefit analysis. Similar to manuscripts getting accepted for publication despite poor reporting of measures against bias, applications for animal experiments may often be approved based on implicit confidence rather than explicit evidence of scientific rigor. Our findings shed serious doubt on the current authorization procedure for animal experiments, as well as the peer-review process for scientific publications, which in the long run may undermine the credibility of research. Developing existing authorization procedures that are already in place in many countries towards a preregistration system for animal research is one promising way to reform the system. This would not only benefit the scientific validity of findings from animal experiments but also help to avoid unnecessary harm to animals for inconclusive research.
NASA Astrophysics Data System (ADS)
Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman
2014-12-01
The process of ATP production is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several studies, there are only a few brief reviews based on mathematical models of this subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, based on new discoveries on the respiratory chain of animal mitochondria, Golfar's model has been used to generate improved results for the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.
Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L
2017-11-01
Recently published data support the use of a web-based risk calculator ( www.anastomoticleak.com ) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program ® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
Statistical numeracy as a moderator of (pseudo)contingency effects on decision behavior.
Fleig, Hanna; Meiser, Thorsten; Ettlin, Florence; Rummel, Jan
2017-03-01
Pseudocontingencies denote contingency estimates inferred from base rates rather than from cell frequencies. We examined the role of statistical numeracy for effects of such fallible but adaptive inferences on choice behavior. In Experiment 1, we provided information on single observations as well as on base rates and tracked participants' eye movements. In Experiment 2, we manipulated the availability of information on cell frequencies and base rates between conditions. Our results demonstrate that a focus on base rates rather than cell frequencies benefits pseudocontingency effects. Learners who are more proficient in (conditional) probability calculation prefer to rely on cell frequencies in order to judge contingencies, though, as was evident from their gaze behavior. If cell frequencies are available in summarized format, they may infer the true contingency between options and outcomes. Otherwise, however, even highly numerate learners are susceptible to pseudocontingency effects. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-12-01
Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A33
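The Landau-Zener ingredient of the method above can be illustrated with the standard single-passage transition probability. This is a generic textbook sketch, not the authors' implementation; the parameter values in the usage are purely illustrative.

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

def landau_zener_probability(h12: float, velocity: float, delta_f: float) -> float:
    """Single-passage nonadiabatic transition probability in the
    Landau-Zener model: p = exp(-2*pi*H12**2 / (hbar * v * |dF|)),
    with H12 the off-diagonal coupling (J), v the radial velocity (m/s),
    and dF the difference of the diabatic potential slopes (J/m)."""
    return math.exp(-2.0 * math.pi * h12 ** 2 / (HBAR * velocity * abs(delta_f)))
```

Faster passage through the avoided crossing makes the transition more diabatic, so the probability grows toward 1 with velocity.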
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-10-01
Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy-particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using a derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manning, Karessa L.; Dolislager, Fredrick G.; Bellamy, Michael B.
The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening-level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify isotopes contributing the highest risk and dose as well as to establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil-to-plant interactions require that updates be implemented into the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil-to-tissue transfer factors (TFs) for animals and soil-to-plant transfer factors (BVs). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.
Teschke, Kay; Koehoorn, Mieke; Shen, Hui; Dennis, Jessica
2015-01-01
Objectives The purpose of this study was to calculate exposure-based bicycling hospitalisation rates in Canadian jurisdictions with different helmet legislation and bicycling mode shares, and to examine whether the rates were related to these differences. Methods Administrative data on hospital stays for bicycling injuries to 10 body region groups and national survey data on bicycling trips were used to calculate hospitalisation rates. Rates were calculated for 44 sex, age and jurisdiction strata for all injury causes and 22 age and jurisdiction strata for traffic-related injury causes. Inferential analyses examined associations between hospitalisation rates and sex, age group, helmet legislation and bicycling mode share. Results In Canada, over the study period 2006–2011, there was an average of 3690 hospitalisations per year and an estimated 593 million annual trips by bicycle among people 12 years of age and older, for a cycling hospitalisation rate of 622 per 100 million trips (95% CI 611 to 633). Hospitalisation rates varied substantially across the jurisdiction, age and sex strata, but only two characteristics explained this variability. For all injury causes, sex was associated with hospitalisation rates; females had rates consistently lower than males. For traffic-related injury causes, higher cycling mode share was consistently associated with lower hospitalisation rates. Helmet legislation was not associated with hospitalisation rates for brain, head, scalp, skull, face or neck injuries. Conclusions These results suggest that transportation and health policymakers who aim to reduce bicycling injury rates in the population should focus on factors related to increased cycling mode share and female cycling choices. Bicycling routes designed to be physically separated from traffic or along quiet streets fit both these criteria and are associated with lower relative risks of injury. PMID:26525719
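The headline rate above follows directly from the reported figures: roughly 3,690 hospitalisations and 593 million trips per year over 2006-2011 yield 622 per 100 million trips. A minimal sketch, with a simple normal-approximation Poisson interval on the event count (the published 611-633 interval is likely wider because it also carries survey uncertainty in the trip estimate):

```python
import math

def rate_per_100m(events: float, trips: float) -> float:
    """Hospitalisations per 100 million bicycle trips."""
    return events / trips * 1e8

def poisson_ci_per_100m(events: float, trips: float, z: float = 1.96):
    """Normal-approximation 95% interval treating the event count as
    Poisson; ignores uncertainty in the trip denominator."""
    half = z * math.sqrt(events) / trips * 1e8
    r = rate_per_100m(events, trips)
    return r - half, r + half

# Six study years: ~3,690 hospitalisations/year, ~593 million trips/year.
rate = rate_per_100m(3690 * 6, 593e6 * 6)  # ~622 per 100 million trips
```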
Factors influencing variation in dentist service rates.
Grembowski, D; Milgrom, P; Fiset, L
1990-01-01
In the previous article, we calculated dentist service rates for 200 general dentists based on a homogeneous, well-educated, upper-middle-class population of patients. Wide variations in the rates were detected. In this analysis, factors influencing variation in the rates were identified. Variation in rates for categories of dental services was explained by practice characteristics, patient exposure to fluoridated water supplies, and non-price competition in the dental market. Rates were greatest in large, busy practices in markets with high fees. Older practices consistently had lower rates across services. As a whole, these variables explained between 5 and 30 percent of the variation in the rates.
Metastable Solution Thermodynamic Properties and Crystal Growth Kinetics
NASA Technical Reports Server (NTRS)
Kim, Soojin; Myerson, Allan S.
1996-01-01
The crystal growth rates of NH4H2PO4, KH2PO4, (NH4)2SO4, KAl(SO4)2·12H2O, NaCl, and glycine and the nucleation rates of KBr, KCl, NaBr·2H2O, NH4Cl, and (NH4)2SO4 were expressed in terms of the fundamental driving force of crystallization calculated from the activity of supersaturated solutions. The kinetic parameters were compared with those from the commonly used kinetic expression based on the concentration difference. From the viewpoint of thermodynamics, rate expressions based on the chemical potential difference provide accurate kinetic representation over a broad range of supersaturation. The rates estimated using the expression based on the concentration difference coincide with the true rates of crystallization only in the concentration range of low supersaturation and deviate from the true kinetics as the supersaturation increases.
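The divergence described above can be made concrete. In an ideal-solution sketch (activity coefficients of ~1 are an assumption here; the paper uses measured activities), the driving force per RT is ln(c/c*), while the common approximation uses the relative supersaturation (c - c*)/c*; since ln(1 + x) ≈ x only for small x, the two agree at low supersaturation and separate as it grows.

```python
import math

def chemical_potential_driving_force(c: float, c_sat: float) -> float:
    """Dimensionless driving force per RT, ideal-solution sketch: ln(c/c*)."""
    return math.log(c / c_sat)

def concentration_driving_force(c: float, c_sat: float) -> float:
    """Common concentration-difference approximation: (c - c*)/c*."""
    return (c - c_sat) / c_sat
```

At 1% supersaturation the two differ by ~0.5%; at 100% supersaturation the concentration form gives 1.0 against ln 2 ≈ 0.693, overstating the driving force by roughly 44%.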
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
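The point-source baseline referred to above is the familiar inverse-square estimate. A hedged sketch: the specific gamma-ray dose constant is left as a parameter because its value must come from published tables (no value is asserted here); only the 131I half-life of about 8.02 days is taken as given.

```python
def point_source_dose_rate(activity_gbq: float, distance_cm: float,
                           gamma_const: float,
                           t_half_days: float = 8.02,
                           t_days: float = 0.0) -> float:
    """Point-source estimate: dose rate = Gamma * A(t) / d**2, with
    decay A(t) = A0 * 2**(-t / T_half). gamma_const is the specific
    gamma-ray dose constant (table value; an input, not assumed here)."""
    activity = activity_gbq * 2.0 ** (-t_days / t_half_days)
    return gamma_const * activity / distance_cm ** 2
```

The attenuation and redistribution effects modeled in the study are exactly what this simple formula omits, which is why it overestimates dose rates at close range.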
Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon
2012-10-01
This study was done to develop and evaluate a drug dosage calculation training program using cognitive loading theory based on a smartphone application. Calculation ability, dosage calculation related self-efficacy and anxiety were measured. A nonequivalent control group design was used. Smartphone application and a handout for self-study were developed and administered to the experimental group and only a handout was provided for control group. Intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with the SPSS 18.0. The experimental group showed more 'self-efficacy for drug dosage calculation' than the control group (t=3.82, p<.001). Experimental group students had higher ability to perform drug dosage calculations than control group students (t=3.98, p<.001), with regard to 'metric conversion' (t=2.25, p=.027), 'table dosage calculation' (t=2.20, p=.031) and 'drop rate calculation' (t=4.60, p<.001). There was no difference in improvement in 'anxiety for drug dosage calculation'. Mean satisfaction score for the program was 86.1. These results indicate that this drug dosage calculation training program using smartphone application is effective in improving dosage calculation related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.
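Two of the skills assessed above ('metric conversion' and 'drop rate calculation') follow standard formulas. A minimal sketch of the underlying arithmetic (the function names are illustrative, not from the training program):

```python
def drop_rate_gtt_per_min(volume_ml: float, drop_factor_gtt_per_ml: float,
                          time_min: float) -> float:
    """IV drip rate: drops/min = volume (mL) x drop factor (gtt/mL) / time (min)."""
    return volume_ml * drop_factor_gtt_per_ml / time_min

def metric_conversion_mg(dose: float, unit: str) -> float:
    """Convert a dose in g, mg, or mcg to milligrams."""
    return {"g": dose * 1000, "mg": dose, "mcg": dose / 1000}[unit]
```

For example, infusing 1,000 mL over 8 hours (480 min) with a 15 gtt/mL set gives 31.25 gtt/min.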
Code of Federal Regulations, 2010 CFR
2010-04-01
... bear interest or a specified return, disclose that fact. If the rate is based on a formula or is calculated in reference to a generally recognized interest rate index, such as a U.S. Treasury securities... prospectus. Present information regarding multiple classes in tables if doing so will aid understanding. If...
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2011 CFR
2011-01-01
... one year; • Minimum and maximum downgrade probability cutoff values, based on data from June 30, 2008... rate factor (Ai,T) is calculated by subtracting 0.4 from the four-year cumulative gross asset growth... weighted average of five component ratings excluding the “S” component. Delinquency and non-accrual data on...
A physically based analytical spatial air temperature and humidity model
Yang Yang; Theodore A. Endreny; David J. Nowak
2013-01-01
Spatial variation of urban surface air temperature and humidity influences human thermal comfort, the settling rate of atmospheric pollutants, and plant physiology and growth. Given the lack of observations, we developed a Physically based Analytical Spatial Air Temperature and Humidity (PASATH) model. The PASATH model calculates spatial solar radiation and heat...
7 CFR 760.909 - Payment calculation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... based on 26 percent of the average fair market value of the livestock. (c) The 2005-2007 LIP national payment rate for eligible livestock contract growers is based on 26 percent of the average income loss... this part); (2) For the loss of income from the dead livestock from the party who contracted with the...
7 CFR 760.909 - Payment calculation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... based on 26 percent of the average fair market value of the livestock. (c) The 2005-2007 LIP national payment rate for eligible livestock contract growers is based on 26 percent of the average income loss... this part); (2) For the loss of income from the dead livestock from the party who contracted with the...
Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring
ERIC Educational Resources Information Center
Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.
2015-01-01
Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
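The sensitivity of OLS slopes to extreme values, and the robust alternative, can be shown with a few lines of stdlib Python. Theil-Sen (median of pairwise slopes) stands in here as one common robust estimator; the weekly scores are hypothetical data for illustration, not from the study.

```python
from itertools import combinations
from statistics import median

def ols_slope(x, y):
    """Ordinary least-squares slope (rate of improvement per week)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def theil_sen_slope(x, y):
    """Robust slope: median of all pairwise slopes."""
    return median((y[j] - y[i]) / (x[j] - x[i])
                  for i, j in combinations(range(len(x)), 2))

# Eight weekly CBM probes growing by 2 words/week, with one inflated
# score at week 4 (hypothetical outlier).
weeks = list(range(8))
scores = [10, 12, 14, 16, 40, 22, 22, 24]
```

The robust estimate recovers the underlying 2 words/week, while OLS is pulled upward by the single extreme probe.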
Kinetic study on bonding reaction of gelatin with CdS nanopaticles by UV-visible spectroscopy.
Tang, Shihua; Wang, Baiyang; Li, Youqun
2015-04-15
The chemical kinetics of gelatin-CdS direct conjugates has been systematically investigated as a function of temperature and reactant concentration (i.e. Cd^2+, S^2- and gelatin) by UV-visible spectroscopy, for the first time. Nonlinear fitting and the differential method were used to calculate the initial rate based on the absorbance-time data. A double logarithmic linear equation for calculating the rate constant (k) and the reaction order (n) was introduced. The reaction kinetic parameters (n, k, Ea, and Z) and activation thermodynamic parameters (ΔG^≠, ΔH^≠, and ΔS^≠) were obtained from variable-temperature kinetic studies. The overall rate equation, allowing evaluation of conditions that provide a required reaction rate, can be expressed as: r = 1.11 × 10^8 exp(-4971/T) [Cd^2+][gelatin]^0.6 [S^2-]^0.6 (M/s). The calculated values of the reaction rate coincide well with the experimental results. A suitable kinetic model is also proposed. This work will provide guidance for the rational design of gelatin-directed syntheses of metal sulfide materials, and help to understand the biological effects of nanoparticles at the molecular level. Copyright © 2015 Elsevier B.V. All rights reserved.
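The fitted rate law in the abstract can be evaluated directly. A sketch under stated assumptions: concentrations in M, temperature in K, and the prefactor and exponent exactly as quoted; note the Arrhenius exponent -4971/T corresponds to an activation energy of 4971 × R ≈ 41.3 kJ/mol.

```python
import math

def reaction_rate(temp_k: float, cd: float, gelatin: float, s: float) -> float:
    """Overall rate law from the abstract (M/s):
    r = 1.11e8 * exp(-4971/T) * [Cd2+] * [gelatin]**0.6 * [S2-]**0.6"""
    return 1.11e8 * math.exp(-4971.0 / temp_k) * cd * gelatin ** 0.6 * s ** 0.6
```

The form makes the orders explicit: first order in Cd^2+ (doubling it doubles the rate) and 0.6 order in each of gelatin and S^2-, with the rate rising with temperature.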
NASA Astrophysics Data System (ADS)
Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao
2017-05-01
To describe the complicated nonlinear process of fatigue short crack evolution, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on a replica fatigue short crack test with nine smooth funnel-shaped specimens and observation of the replica films according to the effective short fatigue crack principle. Owing to the fast decay and nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic search and global optimization of genetic algorithms, a genetic wavelet neural network can reflect the implicit, complex nonlinear relationship when multiple influencing factors are considered together. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared using the genetic wavelet neural network. The simulation results show that the genetic wavelet neural network is a rational and available method for studying the evolution behavior of the fatigue short crack propagation rate. Meanwhile, a traditional data-fitting method for a short crack growth model is also utilized to fit the test data; it is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects of these two methods is interpreted.
Vector Analysis of Ionic Collision on CaCO3 Precipitation Based on Vibration Time History
NASA Astrophysics Data System (ADS)
Mangestiyono, W.; Muryanto, S.; Jamari, J.; Bayuseno, A. P.
2017-05-01
Vibration effects in a piping system can result from the internal factor of the fluid or the external factor of mechanical equipment operation. As the pipe vibrates, the precipitation of CaCO3 on the inner pipe can be affected. In previous research, the effect of vibration on CaCO3 precipitation in a piping system was clearly verified: vibration increased the deposition rate and decreased the induction time. However, the mechanism by which vibration controls the CaCO3 precipitation process had not yet been recognized. In the present research, the mechanism by which vibration affects CaCO3 precipitation was investigated through vector analysis of ionic collisions. The ionic vector force was calculated based on the amount of activation energy, and the vibration force was calculated based on the vibration sensor data. The vector resultant of ionic collisions based on the vibration time history was analyzed to show that vibration deflects ionic collisions randomly in the planar horizontal direction; this collision pattern is the suspected cause of the increased deposition rate.
39 CFR 3010.23 - Calculation of percentage change in rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... class of mail, the percentage change in rates is calculated in three steps. First, the volume of each... DOMINANT PRODUCTS Rules for Applying the Price Cap § 3010.23 Calculation of percentage change in rates. (a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musilek, Ladislav; Polach, Tomas; Trojek, Tomas
2008-08-07
Thermoluminescence (TL) dating is based on accumulating the natural radiation dose in the material of a dated artefact (brick, pottery, etc.), and comparing the dose accumulated during the lifetime of the object with the dose rate within the sample collected for TL measurement. Determining the dose rate from natural radionuclides in materials is one of the most important and most difficult parts of the technique. The most important radionuclides present are usually nuclides of the uranium and thorium decay series and ⁴⁰K. An analysis of the total potassium concentration enables us to determine the ⁴⁰K content effectively, and from this it is possible to calculate the dose rate originating from this radiation source. X-ray fluorescence (XRF) analysis can be used to determine the potassium concentration in bricks rapidly and efficiently. The procedure for analysing potassium, examples of results of dose rate calculation and possible sources of error are described here.
Rostami, Mehran; Karamouzian, Mohammad; Khosravi, Ardeshir; Rezaeian, Shahab
2018-06-01
We aimed to compare the fatal drug overdose rates in Iran in 2006 and 2011. This analysis was performed based on data on fatal drug overdose cases from the Iranian death registration system. The crude and adjusted rates per 100,000 population for geographical regions stratified by gender and age groups were calculated using the 2006 and 2011 census of the Iranian population. Annual percentage change was calculated to examine annual changes of fatal drug overdose rates across different regions. The overall age-adjusted rate of fatal drug overdose decreased from 3.62 in 2006 to 2.77 in 2011. A substantial difference in the distribution of fatal drug overdoses was found across geographical regions by gender and age groups. Rates of fatal drug overdose were higher among Iranian men and in both younger and older age groups, which calls for scaling up harm reduction and increasing access to gender- and age-specific substance use treatment services. Copyright © 2018 Elsevier Ltd. All rights reserved.
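The annual-percentage-change comparison described in this abstract amounts to a compound rate change between the two census years. The function below is an illustrative reconstruction (not the authors' code); the input rates are the national age-adjusted figures quoted in the abstract.

```python
def annual_percent_change(rate_start, rate_end, years):
    """Compound annual percentage change between two period rates."""
    return ((rate_end / rate_start) ** (1.0 / years) - 1.0) * 100.0

# National age-adjusted fatal-overdose rates per 100,000 (2006 -> 2011):
apc = annual_percent_change(3.62, 2.77, 2011 - 2006)  # about -5.2% per year
```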
NASA Astrophysics Data System (ADS)
Wong, Michael H.; Atreya, Sushil K.; Kuhn, William R.; Romani, Paul N.; Mihalka, Kristen M.
2015-01-01
Models of cloud condensation under thermodynamic equilibrium in planetary atmospheres are useful for several reasons. These equilibrium cloud condensation models (ECCMs) calculate the wet adiabatic lapse rate, determine saturation-limited mixing ratios of condensing species, calculate the stabilizing effect of latent heat release and molecular weight stratification, and locate cloud base levels. Many ECCMs trace their heritage to Lewis (Lewis, J.S. [1969]. Icarus 10, 365-378) and Weidenschilling and Lewis (Weidenschilling, S.J., Lewis, J.S. [1973]. Icarus 20, 465-476). Calculation of atmospheric structure and gas mixing ratios are correct in these models. We resolve errors affecting the cloud density calculation in these models by first calculating a cloud density rate: the change in cloud density with updraft length scale. The updraft length scale parameterizes the strength of the cloud-forming updraft, and converts the cloud density rate from the ECCM into cloud density. The method is validated by comparison with terrestrial cloud data. Our parameterized updraft method gives a first-order prediction of cloud densities in a “fresh” cloud, where condensation is the dominant microphysical process. Older evolved clouds may be better approximated by another 1-D method, the diffusive-precipitative Ackerman and Marley (Ackerman, A.S., Marley, M.S. [2001]. Astrophys. J. 556, 872-884) model, which represents a steady-state equilibrium between precipitation and condensation of vapor delivered by turbulent diffusion. We re-evaluate observed cloud densities in the Galileo Probe entry site (Ragent, B. et al. [1998]. J. Geophys. Res. 103, 22891-22910), and show that the upper and lower observed clouds at ∼0.5 and ∼3 bars are consistent with weak (cirrus-like) updrafts under conditions of saturated ammonia and water vapor, respectively. The densest observed cloud, near 1.3 bar, requires unexpectedly strong updraft conditions, or higher cloud density rates. 
The cloud density rate in this layer may be augmented by a composition with non-NH4SH components (possibly including adsorbed NH3).
White, A.F.; Blum, A.E.; Schulz, M.S.; Vivit, D.V.; Stonestrom, David A.; Larsen, M.; Murphy, S.F.; Eberl, D.
1998-01-01
The pristine Rio Icacos watershed in the Luquillo Mountains of eastern Puerto Rico has the fastest documented weathering rate of silicate rocks on the Earth's surface. A regolith propagation rate of 58 m Ma⁻¹, calculated from iso-volumetric saprolite formation from quartz diorite, is comparable to the estimated denudation rate (25-50 m Ma⁻¹) but is an order of magnitude faster than the global average weathering rate (6 m Ma⁻¹). Weathering occurs in two distinct environments: plagioclase and hornblende react at the saprock interface, while biotite and quartz weather in the overlying thick saprolitic regolith. These environments produce distinctly different water chemistries, with K, Mg, and Si increasing linearly with depth in saprolite porewaters and with stream waters dominated by Ca, Na, and Si. Such differences are atypical of less intense weathering in temperate watersheds. Porewater chemistry in the shallow regolith is controlled by closed-system recycling of inorganic nutrients such as K. Long-term elemental fluxes through the regolith (e.g., Si = 1.7 × 10⁻⁸ mol m⁻² s⁻¹) are calculated from mass losses based on changes in porosity and chemistry between the regolith and bedrock and from the age of the regolith surface (200 ka). Mass losses attributed to solute fluxes are determined using a step-wise infiltration model which calculates mineral inputs to the shallow and deep saprolite porewaters and to stream water. Pressure heads decrease with depth in the shallow regolith (-2.03 m H2O m⁻¹), indicating that both increasing capillary tension and gravimetric potential control porewater infiltration. Interpolation of experimental hydraulic conductivities produces an infiltration rate of 1 m yr⁻¹ at average field moisture saturation, which is comparable with LiBr tracer tests and with base discharge from the watershed. Short-term weathering fluxes calculated from solute chemistries and infiltration rates (e.g., Si = 1.4 × 10⁻⁸ mol m⁻² s⁻¹) are compared to watershed flux rates (e.g., Si = 2.7 × 10⁻⁸ mol m⁻² s⁻¹). Consistency between three independently determined sets of weathering fluxes implies that possible changes in precipitation, temperature, and vegetation over the last several hundred thousand years have not significantly impacted weathering rates in the Luquillo Mountains of Puerto Rico. This has important ramifications for tropical environments and global climate change. Copyright © 1998 Elsevier Science Ltd.
The longitudinal study of turnover and the cost of turnover in EMS
Patterson, P. Daniel; Jones, Cheryl B.; Hubble, Michael W.; Carr, Matthew; Weaver, Matthew D.; Engberg, John; Castle, Nicholas
2010-01-01
Purpose Few studies have examined employee turnover and associated costs in emergency medical services (EMS). The purpose of this study was to quantify the mean annual rate of turnover, total median cost of turnover, and median cost per termination in a diverse sample of EMS agencies. Methods A convenience sample of 40 EMS agencies was followed over a 6-month period. Internet, telephone, and on-site data collection methods were used to document terminations, new hires, open positions, and costs associated with turnover. The cost associated with turnover was calculated based on a modified version of the Nursing Turnover Cost Calculation Methodology (NTCCM). The NTCCM identified direct and indirect costs through a series of questions that agency administrators answered monthly during the study period. A previously tested measure of turnover was used to calculate the mean annual rate of turnover. All calculations were weighted by the size of the EMS agency roster. The mean annual rate of turnover, total median cost of turnover, and median cost per termination were determined for 3 categories of agency staff mix: all paid staff, mix of paid and volunteer (mixed), and all-volunteer. Results The overall weighted mean annual rate of turnover was 10.7%. This rate varied slightly across agency staffing mix: (all-paid=10.2%, mixed=12.3%, all-volunteer=12.4%). Among agencies that experienced turnover (n=25), the weighted median cost of turnover was $71,613.75, which varied across agency staffing mix: (all-paid=$86,452.05, mixed=$9,766.65, and all-volunteer=$0). The weighted median cost per termination was $6,871.51 and varied across agency staffing mix: (all-paid=$7,161.38, mixed=$1,409.64, and all-volunteer=$0). Conclusions Annual rates of turnover and costs associated with turnover vary widely across types of EMS agencies. The study’s mean annual rate of turnover was lower than expected based on information appearing in the news media and EMS trade magazines.
Findings provide estimates of two key workforce measures – turnover rates and costs – where previously none have existed. Local EMS directors and policymakers at all levels of government may find the results and study methodology useful towards designing and evaluating programs targeting the EMS workforce. PMID:20199235
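The roster-weighted mean turnover rate used in the study above can be illustrated with a minimal sketch. The NTCCM cost questions are not reproduced here, and the agency figures below are hypothetical:

```python
def weighted_mean_turnover(agencies):
    """Mean annual turnover rate across agencies, weighted by roster size.

    agencies: iterable of (roster_size, annual_turnover_rate) pairs.
    """
    total_roster = sum(size for size, _ in agencies)
    return sum(size * rate for size, rate in agencies) / total_roster

# Hypothetical agencies: (roster size, annual turnover rate)
sample = [(120, 0.10), (40, 0.20), (40, 0.00)]
rate = weighted_mean_turnover(sample)  # 10% weighted mean
```

Weighting by roster size keeps one large agency from being swamped by many small ones, which is why the study weighted all of its rate and cost summaries the same way.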
Precisely and Accurately Inferring Single-Molecule Rate Constants
Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.
2017-01-01
The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restrict the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280
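The thresholding-based idealization and dwell-time rate estimation described in the abstract above can be sketched as follows. This is a generic illustration, not the authors' Bayesian machinery; function names are invented for the example, and the exponential maximum-likelihood estimate k = 1/mean(dwell) is the textbook two-state result.

```python
import numpy as np

def idealize(signal, threshold):
    """Threshold a signal trajectory into a binary state trajectory."""
    return (np.asarray(signal) > threshold).astype(int)

def dwell_times(states, dt):
    """Durations of consecutive runs in each state, in time units."""
    states = np.asarray(states)
    change = np.flatnonzero(np.diff(states)) + 1          # run boundaries
    bounds = np.concatenate(([0], change, [len(states)]))
    runs = np.diff(bounds) * dt                            # run lengths
    run_states = states[bounds[:-1]]                       # state of each run
    return {int(s): runs[run_states == s] for s in np.unique(states)}

def rate_constant(dwells):
    """MLE for an exponential dwell-time distribution: k = 1/mean dwell."""
    return 1.0 / np.mean(dwells)
```

Note that the last dwell in each state is truncated by the end of the trajectory, which is one source of the finite-length bias the abstract discusses.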
Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force
NASA Astrophysics Data System (ADS)
Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.
2016-01-01
The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terentyev, V S; Simonov, V A
2016-02-28
Numerical modelling demonstrates the possibility of fabricating an all-fibre multibeam two-mirror reflection interferometer based on a metal–dielectric diffraction structure in its front mirror. The calculations were performed using eigenmodes of a double-clad single-mode fibre. The calculation results indicate that, using a metallic layer in the structure of the front mirror of such an interferometer and a diffraction effect, one can reduce the Ohmic loss by a factor of several tens in comparison with a continuous thin metallic film. (laser crystals and Bragg gratings)
Estimating Net Primary Productivity Using Satellite and Ancillary Data
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Houser, Paul (Technical Monitor)
2001-01-01
The net primary productivity (C), or annual rate of carbon accumulation per unit ground area by terrestrial plant communities, is the difference of the rate of gross photosynthesis (Ag) and autotrophic respiration (R) per unit ground area. Although available observations show that R is a large and variable fraction of Ag, viz., 0.3 to 0.7, it is generally recognized that much uncertainty exists in this fraction due to difficulties associated with the needed measurements. Additional uncertainties arise when these measurements are extrapolated to regional or global land surfaces using empirical equations, for example, regression equations relating C to mean annual precipitation and air temperature. Here, a process-based approach has been taken to calculate Ag and R using satellite and ancillary data. Ag has been expressed as the product of radiation use efficiency and the magnitude of intercepted photosynthetically active radiation (PAR), normalized by stresses due to soil water shortage and air temperature away from the optimum range. A biophysical model has been used to determine the radiation use efficiency from the maximum rate of carbon assimilation by a leaf, foliage temperature, and the fraction of diffuse PAR incident on a canopy. All meteorological data (PAR, air temperature, precipitation, etc.) needed for the calculation are derived from satellite observations, while land use/land cover data (based on satellite and ground measurements) have been used to assess the maximum rate of carbon assimilation by a leaf of each cover type based on field measurements. R has been calculated as the sum of maintenance and growth components.
The maintenance respiration of foliage and live fine roots at a standard temperature for different land cover types has been determined from their nitrogen content using field and satellite measurements, while that of the living fraction of the woody stem (viz., sapwood) has been determined from the seasonal maximum leaf area index derived from satellite observations. These maintenance respiration values were then adjusted to the prevailing air temperature according to a prescribed non-linear variation of respiration with temperature. The growth respiration has been calculated from the difference of Ag and maintenance respiration, according to the two-compartment model. The results of calculations will be reported for 36 consecutive months (1987-1989) over large contiguous areas (ca. 10⁵ sq km) of agricultural land and tropical humid evergreen forests, and compared with available field data.
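The bookkeeping described above (gross photosynthesis as radiation-use efficiency times intercepted PAR scaled by stress factors, and respiration as maintenance plus growth components, with growth respiration a fraction of the photosynthetic surplus per the two-compartment model) can be sketched as follows. All parameter values are hypothetical placeholders, not the paper's calibrated inputs:

```python
def gross_photosynthesis(rue, apar, water_stress, temp_stress):
    """A_g: radiation-use efficiency times intercepted PAR, scaled by
    soil-water and temperature stress factors in [0, 1]."""
    return rue * apar * water_stress * temp_stress

def net_primary_productivity(a_g, r_maint, growth_frac=0.25):
    """NPP = A_g - R, with R = maintenance + growth respiration.
    Growth respiration is taken as a fixed fraction of (A_g - R_m),
    following the two-compartment model; growth_frac is illustrative."""
    r_growth = growth_frac * (a_g - r_maint)
    return a_g - r_maint - r_growth

# Hypothetical daily values (g C m^-2 units assumed for illustration)
a_g = gross_photosynthesis(rue=2.0, apar=100.0, water_stress=0.9, temp_stress=0.8)
npp = net_primary_productivity(a_g, r_maint=44.0)
```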
NASA Astrophysics Data System (ADS)
Penjweini, Rozhin; Kim, Michele M.; Ong, Yi Hong; Zhu, Timothy C.
2017-02-01
This preclinical study examines four dosimetric quantities (light fluence, photosensitizer photobleaching ratio, PDT dose, and reacted singlet oxygen, [1O2]rx) to predict local control rate (LCR) for 2-(1-Hexyloxyethyl)-2-devinyl pyropheophorbide (HPPH)-mediated photodynamic therapy (PDT). Mice bearing radiation-induced fibrosarcoma (RIF) tumors were treated with different in-air fluences (135, 250 and 350 J/cm2) and in-air fluence rates (50, 75 and 150 mW/cm2) at 0.25 mg/kg HPPH and a drug-light interval of 24 hours using a 1 cm diameter collimated laser beam at 665 nm wavelength. A macroscopic model was used to calculate [1O2]rx based on in vivo explicit dosimetry of the initial tissue oxygenation, photosensitizer concentration, and tissue optical properties. PDT dose was defined as a temporal integral of drug concentration and fluence rate (φ) at a 3 mm tumor depth. Light fluence rate was calculated throughout the treatment volume based on Monte-Carlo simulation and measured tissue optical properties. The tumor volume of each mouse was tracked for 30 days after PDT and Kaplan-Meier analyses for LCR were performed based on a tumor volume <=100 mm3, for four dose metrics: fluence, HPPH photobleaching rate, PDT dose, and [1O2]rx. The results of this study showed that [1O2]rx is the best dosimetric quantity that can predict tumor response and correlate with LCR.
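The PDT dose defined above, a temporal integral of photosensitizer concentration times fluence rate at depth, can be sketched numerically. This is a trapezoidal-rule illustration with hypothetical values, not the study's dosimetry code:

```python
import numpy as np

def pdt_dose(times_s, drug_conc, fluence_rate):
    """Temporal integral of concentration * fluence rate at a fixed depth,
    computed with the trapezoidal rule."""
    t = np.asarray(times_s, dtype=float)
    y = np.asarray(drug_conc, dtype=float) * np.asarray(fluence_rate, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# Hypothetical: constant 1 uM drug, 0.05 W/cm^2 at 3 mm depth, 600 s exposure
dose = pdt_dose([0.0, 300.0, 600.0], [1.0, 1.0, 1.0], [0.05, 0.05, 0.05])
```

In practice both the photosensitizer concentration (through photobleaching) and the fluence rate vary during treatment, which is why the integral form matters.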
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguado, Alfredo; Roncero, Octavio; Zanchet, Alexandre
The impact of the photodissociation of HCN and HNC isomers is analyzed in different astrophysical environments. For this purpose, the individual photodissociation cross sections of HCN and HNC isomers have been calculated in the 7–13.6 eV photon energy range for a temperature of 10 K. These calculations are based on the ab initio calculation of three-dimensional adiabatic potential energy surfaces of the 21 lower electronic states. The cross sections are then obtained using a quantum wave packet calculation of the rotational transitions needed to simulate a rotational temperature of 10 K. The cross section calculated for HCN shows significant differences with respect to the experimental one, and this is attributed to the need to consider non-adiabatic transitions. Ratios between the photodissociation rates of HCN and HNC under different ultraviolet radiation fields have been computed by renormalizing the rates to the experimental value. It is found that HNC is photodissociated faster than HCN by a factor of 2.2 for the local interstellar radiation field and 9.2 for the solar radiation field, at 1 au. We conclude that to properly describe the HNC/HCN abundance ratio in astronomical environments illuminated by an intense ultraviolet radiation field, it is necessary to use different photodissociation rates for each of the two isomers, which are obtained by integrating the product of the photodissociation cross sections and ultraviolet radiation field over the relevant wavelength range.
Effect of wave function on the proton-induced L X-ray production cross sections for ₆₂Sm and ₇₄W
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shehla,; Kaur, Rajnish; Kumar, Anil
The Lk (k = l, α, β, γ) X-ray production cross sections have been calculated for ₇₄W and ₆₂Sm at different incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the Li (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR theory, in order to assess the influence of the wave function on the X-ray production cross sections. The calculated cross sections have been compared with the measured cross sections reported in a recent compilation to check the reliability of the calculated values.
Hysterectomy in Germany: a DRG-based nationwide analysis, 2005-2006.
Stang, Andreas; Merrill, Ray M; Kuss, Oliver
2011-07-01
Hysterectomy is among the more common surgical procedures in gynecology. The aim of this study was to calculate population-wide rates of hysterectomy across Germany and to obtain information on the different modalities of hysterectomy currently performed in German hospitals. This was done on the basis of nationwide DRG statistics (DRG = diagnosis-related groups) covering the years 2005-2006. We analyzed the nationwide DRG statistics for 2005 and 2006, in which we found 305 015 hysterectomies. Based on these data we calculated hysterectomy rates for the female population. We determined the indications for each hysterectomy with an algorithm based on the ICD-10 codes, and we categorized the operations on the basis of their OPS codes (OPS = Operationen- und Prozedurenschlüssel [Classification of Operations and Procedures]). The overall rate of hysterectomy in Germany was 362 per 100 000 person-years. 55% of hysterectomies for benign diseases of the female genital tract were performed transvaginally. Bilateral ovariectomy was performed concomitantly in 23% of all hysterectomies, while 4% of all hysterectomies were subtotal. Hysterectomy rates varied considerably across federal states: the rate for benign disease was lowest in Hamburg (213.8 per 100 000 women per year) and highest in Mecklenburg-West Pomerania (361.9 per 100 000 women per year). Hysterectomy rates vary markedly from one region to another. Moreover, even though recent studies have shown that bilateral ovariectomy is harmful to women under 50 who undergo hysterectomy for benign disease, it is still performed in 4% of all hysterectomies for benign indications in Germany.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)
2006-01-01
Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS® and ANSYS® and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible-mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS®/Standard using the VCCT for ABAQUS® add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS® finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS® add-on.
Continuous subcutaneous insulin infusion: Special needs for children.
Adolfsson, Peter; Ziegler, Ralph; Hanas, Ragnar
2017-06-01
Continuous subcutaneous insulin infusion (CSII) is a very common therapy for children with type 1 diabetes. Due to physiological differences, children have different requirements for their insulin pumps than adults. The main difference is the need for very low basal rates. Even though most available insulin pumps achieve high accuracy at usual basal rates, accuracy decreases at lower rates. In addition, the smallest amount that can be delivered at one time limits both the fine-tuning of the basal rate and the options for temporary basal rates. Alarms in case of occlusion depend on the accumulation of a certain amount of insulin in the catheter, and therefore the time until such an alarm is triggered is much longer at lower basal rates. Accordingly, the risk of hyperglycemia developing into diabetic ketoacidosis increases. The availability of bolus advisors facilitates the calculation of meal and correction boluses for children and their parents. However, there are some differences between the calculators, and the settings on which the calculation is based are very important. Better connectivity, for example with a system for continuous glucose monitoring, might help to further increase the safety of CSII use in children. When selecting an insulin pump for a child, the features and characteristics of available pumps should be properly compared to ensure an effective and safe therapy. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
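The meal-plus-correction arithmetic behind a bolus advisor can be sketched as below. This is the generic textbook formula, not any specific pump's algorithm, and all numbers are illustrative; commercial advisors differ precisely in how they handle insulin on board and glucose targets, which is why the abstract stresses that the underlying settings matter.

```python
def bolus_units(carbs_g, carb_ratio, glucose, target, sensitivity, iob=0.0):
    """Generic bolus advisor formula (illustrative, not a specific pump's).

    meal bolus = carbs / insulin-to-carb ratio (g per unit)
    correction = (current - target glucose) / sensitivity factor
    Insulin on board (IOB) is subtracted; the result is floored at zero.
    """
    meal = carbs_g / carb_ratio
    correction = (glucose - target) / sensitivity
    return max(meal + correction - iob, 0.0)

# Illustrative: 60 g carbs, ICR 10 g/U, glucose 180 vs. target 120 mg/dL,
# ISF 30 mg/dL per U, 1 U still on board
dose = bolus_units(60, 10, 180, 120, 30, iob=1.0)  # 6 + 2 - 1 = 7 units
```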
7 CFR 760.811 - Rates and yields; calculating payments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...
7 CFR 760.811 - Rates and yields; calculating payments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...
7 CFR 760.811 - Rates and yields; calculating payments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...
7 CFR 760.811 - Rates and yields; calculating payments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...
7 CFR 760.811 - Rates and yields; calculating payments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... from NASS or other sources approved by FSA that show there is a significant difference in yield or value based on a distinct and separate end use of the crop. Despite potential differences in yield or...
2018-05-01
the descriptors were correlated to experimental rate constants. The five descriptors fell into one of two categories: whole-molecule descriptors or...model based on these correlations. Although that goal was not achieved in full, considerable progress has been made, and there is potential for a...readme.txt) and compiled. We then searched for correlations between the calculated properties from theory and the experimental measurements of reaction rate
Tsai, Charlie; Lee, Kyoungjin; Yoo, Jong Suk; ...
2016-02-16
Density functional theory calculations are used to investigate thermal water decomposition over the close-packed (111), stepped (211), and open (100) facets of transition metal surfaces. A descriptor-based approach is used to determine that the (211) facet leads to the highest possible rates. Based on these results, a range of 96 binary alloys was screened for potential activity, and a rate-control analysis was performed to assess how the overall rate could be improved.
NASA Technical Reports Server (NTRS)
James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.
1988-01-01
Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite a growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. This paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
Sintering activation energy of MoSi2-WSi2-Si3N4 ceramic
NASA Astrophysics Data System (ADS)
Titov, D. D.; Lysenkov, A. S.; Kargin, Yu F.; Frolova, M. G.; Gorshkov, V. A.; Perevislov, S. N.
2018-04-01
The activation energy of the sintering process was calculated based on dilatometric studies of the shrinkage of (Mo,W)Si2 + Si3N4 composite ceramics. The (Mo,W)Si2 powders were obtained as solid-phase solutions of 70 wt% MoSi2 and 30 wt% WSi2 by SHS at ISMAN RAS. The Si3N4 content ranged from 1 to 15 wt%. Sintering was carried out up to 1850°C in an Ar atmosphere at heating rates of 5, 10, 12 and 15°C/min in dilatometer tests. Based on the differential kinetic analysis method (Friedman's method), the activation energy of the sintering process of (Mo,W)Si2 + Si3N4 was calculated. A two-stage sintering process was observed, and the dependence of the activation energy on the Si3N4 content was shown. An average value of Q = 370 kJ/mol was obtained.
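Friedman's differential isoconversional method, as used above, extracts the activation energy from the slope of ln(shrinkage rate) versus 1/T at a fixed degree of conversion across the different heating rates: ln(rate) = ln(A f(α)) − Q/(RT), so slope = −Q/R. A minimal sketch with synthetic Arrhenius data (not the paper's dilatometer measurements):

```python
import math

R_GAS = 8.314  # J/(mol*K)

def friedman_activation_energy(temps_K, rates):
    """Q from the least-squares slope of ln(rate) vs 1/T (slope = -Q/R)."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(r) for r in rates]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -slope * R_GAS
```

For synthetic rates generated with Q = 370 kJ/mol, the fit recovers that value; with real dilatometer data the fit is repeated at each fixed conversion level, giving Q as a function of conversion.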
Analysis of the influence of advanced materials for aerospace products R&D and manufacturing cost
NASA Astrophysics Data System (ADS)
Shen, A. W.; Guo, J. L.; Wang, Z. J.
2015-12-01
In this paper, we point out the deficiency of the traditional model for estimating aerospace product Research & Development (R&D) and manufacturing cost, based on an analysis of the wide use of advanced materials in aviation products. We then propose estimating formulas for cost factors representing the influence of advanced materials on the labor cost rate and the manufacturing materials cost rate. Value ranges for common advanced materials, such as composites and titanium alloys, are presented for both the labor and materials aspects. Finally, we estimate the R&D and manufacturing cost of the F/A-18, F/A-22, B-1B and B-2 aircraft using both the common DAPCA IV model and the modified model proposed in this paper. The results show that accounting for advanced materials greatly improves estimation precision, indicating that the proposed method is sound and reasonable.
Research on fully distributed optical fiber sensing security system localization algorithm
NASA Astrophysics Data System (ADS)
Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen
2013-12-01
A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. For this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple groups of data separately, the algorithm exploits the advantages of frequency analysis to identify the most effective data group more accurately while still meeting the requirements of a real-time monitoring system. Supplemented with a short-term energy calculation on the grouped signal, the most effective data group can be quickly picked out. Finally, the accurate location of the climbing point is obtained through a cross-correlation localization algorithm. The experimental results show that the proposed algorithm can accurately locate the climbing point while effectively filtering out outside interference noise from non-climbing behavior.
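The two signal-processing steps named above can be sketched generically: pick the active data group by short-time average zero-crossing rate, then locate the event from the cross-correlation delay between the two interferometer outputs. This is an illustration of the standard techniques, not the authors' implementation:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample pairs whose signs differ."""
    s = np.sign(frame)
    return np.count_nonzero(s[1:] != s[:-1]) / (len(frame) - 1)

def pick_active_frame(signal, frame_len):
    """Split the signal into frames; return the index of the frame with
    the highest short-time zero-crossing rate (the 'active' group)."""
    n = len(signal) // frame_len
    frames = np.reshape(signal[:n * frame_len], (n, frame_len))
    return int(np.argmax([zero_crossing_rate(f) for f in frames]))

def delay_by_cross_correlation(a, b):
    """Lag (in samples) of signal a relative to b via full cross-correlation;
    with a known propagation speed, the lag maps to a position on the fiber."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)
```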
Predictive onboard flow control for packet switching satellites
NASA Technical Reports Server (NTRS)
Bobinsky, Eric A.
1992-01-01
We outline two alternate approaches to predicting the onset of congestion in a packet switching satellite, and argue that predictive, rather than reactive, flow control is necessary for the efficient operation of such a system. The first method discussed is based on standard statistical techniques which are used to periodically calculate a probability of near-term congestion based on arrival rate statistics. If this probability exceeds a preset threshold, the satellite would transmit a rate-reduction signal to all active ground stations. The second method discussed would utilize a neural network to periodically predict the occurrence of buffer overflow based on input data which would include, in addition to arrival rates, the distributions of packet lengths, source addresses, and destination addresses.
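A threshold test of the first kind might be sketched as below. The Poisson arrival model and the moving-average rate estimate are assumptions for illustration; the paper does not specify its statistical model.

```python
import math

def poisson_tail(lam, k):
    """P(N > k) for N ~ Poisson(lam), by direct summation of the CDF."""
    term = math.exp(-lam)
    cdf = term
    for i in range(1, k + 1):
        term *= lam / i
        cdf += term
    return 1.0 - cdf

def congestion_alarm(arrivals_per_slot, free_buffer, horizon_slots, threshold):
    """Raise a rate-reduction alarm when the estimated probability of
    near-term buffer overflow exceeds a preset threshold.
    Hypothetical sketch: estimate the arrival rate from a recent window,
    then ask how likely arrivals over the horizon exceed free buffer space."""
    est_rate = sum(arrivals_per_slot) / len(arrivals_per_slot)
    p_overflow = poisson_tail(est_rate * horizon_slots, free_buffer)
    return p_overflow > threshold
```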
49 CFR 1141.1 - Procedures to calculate interest rates.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the portion of the year covered by the interest rate. A simple multiplication of the nominal rate by... 49 Transportation 8 2010-10-01 2010-10-01 false Procedures to calculate interest rates. 1141.1... TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION RULES OF PRACTICE PROCEDURES TO CALCULATE INTEREST RATES...
Yakimov, Eugene B
2016-06-01
An approach for predicting the output parameters of a (63)Ni-based betavoltaic battery is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.
Infiltration modeling guidelines for commercial building energy analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.
This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of the infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.
Sun, Hongyan; Law, Chung K
2007-05-17
The reaction kinetics for the thermal decomposition of monomethylhydrazine (MMH) was studied with quantum Rice-Ramsperger-Kassel (QRRK) theory and a master equation analysis for pressure falloff. Thermochemical properties were determined by ab initio and density functional calculations. The entropies, S°(298.15 K), and heat capacities, Cp°(T) (0 ≤ T/K ≤ 1500), from vibrational, translational, and external rotational contributions were calculated using statistical mechanics based on the vibrational frequencies and structures obtained from the density functional study. Potential barriers for internal rotations were calculated at the B3LYP/6-311G(d,p) level, and hindered rotational contributions to S°(298.15 K) and Cp°(T) were calculated by solving the Schrödinger equation with free rotor wave functions, and the partition coefficients were treated by direct integration over energy levels of the internal rotation potentials. Enthalpies of formation, ΔfH°(298.15 K), for the parent MMH (CH3NHNH2) and its corresponding radicals CH3N*NH2, CH3NHN*H, and C*H2NHNH2 were determined to be 21.6, 48.5, 51.1, and 62.8 kcal mol⁻¹ by use of isodesmic reaction analysis and various ab initio methods. The kinetic analysis of the thermal decomposition, abstraction, and substitution reactions of MMH was performed at the CBS-QB3 level, with those of N-N and C-N bond scissions determined by high-level CCSD(T)/6-311++G(3df,2p)//MPWB1K/6-31+G(d,p) calculations. Rate constants of thermally activated MMH to dissociation products were calculated as functions of pressure and temperature. An elementary reaction mechanism based on the calculated rate constants, thermochemical properties, and literature data was developed to model the experimental data on the overall MMH thermal decomposition rate.
The reactions of N-N and C-N bond scission were found to be the major reaction paths for the modeling of MMH homogeneous decomposition at atmospheric conditions.
Gucciardi, Enza; Chan, Vivian Wing-Sheung; Manuel, Lisa; Sidani, Souraya
2013-08-01
This systematic literature review aims to identify diabetes self-management education (DSME) features that improve diabetes education for Black African/Caribbean and Hispanic/Latin American women with Type 2 diabetes mellitus. We conducted a literature search in six health databases for randomized controlled trials and comparative studies. Success rates of intervention features were calculated based on their effectiveness in improving glycated hemoglobin (HbA1c), anthropometrics, physical activity, or diet outcomes. Calculations of rate differences assessed whether an intervention feature positively or negatively affected an outcome. From the 13 studies included in our analysis, we identified 38 intervention features in relation to their success with an outcome. Five intervention features had positive rate differences across at least three outcomes: hospital-based interventions, group interventions, the use of situational problem-solving, frequent sessions, and incorporating dietitians as interventionists. Six intervention features had high positive rate differences (i.e. ≥50%) on specific outcomes. Different DSME intervention features may influence broad and specific self-management outcomes for women of African/Caribbean and Hispanic/Latin American ethnicity. With the emphasis on patient-centered care, patients and care providers can consider options based on DSME intervention features and their broad and specific impacts on outcomes to potentially make programming more effective. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
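The rate-difference score described above can be illustrated with a small sketch. The data layout and function name are hypothetical, and the review's exact scoring rules are not reproduced.

```python
def rate_difference(studies):
    """Rate difference for one intervention feature and one outcome:
    (successes with feature / n with feature) -
    (successes without feature / n without feature).

    `studies` is a list of (has_feature: bool, improved_outcome: bool)
    pairs -- an illustrative layout, not the review's data format."""
    with_f = [s for s in studies if s[0]]
    without_f = [s for s in studies if not s[0]]
    success = lambda group: sum(1 for _, ok in group if ok) / len(group)
    return success(with_f) - success(without_f)
```

A positive value suggests the feature is associated with better outcomes; a negative value suggests the opposite.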
NASA Astrophysics Data System (ADS)
Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie
2018-05-01
One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
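The sampling scheme above, drawing perturbed rate-coefficient sets from a log-normal distribution and reading off percentile-based uncertainties, can be sketched as follows. This is a simplified illustration: the plasma model itself is not reproduced, and the mapping from the stated uncertainty factor to the log-normal sigma is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rate_sets(nominal_rates, uncertainty_factor, n_sets=2000):
    """Draw n_sets perturbed rate-coefficient sets from a log-normal
    distribution centred on the nominal values. sigma is chosen so that
    roughly 68% of samples fall within the stated factor (assumption)."""
    sigma = np.log(uncertainty_factor)
    k = np.asarray(nominal_rates, dtype=float)
    return k * rng.lognormal(mean=0.0, sigma=sigma, size=(n_sets, k.size))

def percentile_uncertainty(outputs, confidence=0.70):
    """Half-width of the central `confidence` interval of the model
    outputs, relative to the median (e.g. the quoted 70% level)."""
    lo, hi = np.percentile(outputs, [50 * (1 - confidence),
                                     100 - 50 * (1 - confidence)])
    return (hi - lo) / (2 * np.median(outputs))
```

Each sampled rate set would be run through the plasma model, and the spread of the resulting conversions and selectivities gives the quoted uncertainties.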
Center for Research on Infrared Detectors (CENTROID)
2006-09-30
calculations to reevaluate the band-to-band Auger-1 lifetime in n-type LWIR HgCdTe because the Auger-1 lifetime can be measured in long-wavelength...infrared (LWIR) HgCdTe. Our calculations of the electronic band structure are based on a fourteen-band bulk basis, including spin-orbit splitting. The...within better than a factor of two between theoretically and experimentally determined Auger rates for a wide variety of MWIR and LWIR superlattices
Energy barriers and rates of tautomeric transitions in DNA bases: ab initio quantum chemical study.
Basu, Soumalee; Majumdar, Rabi; Das, Gourab K; Bhattacharyya, Dhananjay
2005-12-01
Tautomeric transitions of DNA bases are proton transfer reactions, which are important in biology. These reactions are involved in spontaneous point mutations of the genetic material. In the present study, intrinsic reaction coordinates (IRC) analyses through ab initio quantum chemical calculations have been carried out for the individual DNA bases A, T, G, C and also A:T and G:C base pairs to estimate the kinetic and thermodynamic barriers using MP2/6-31G** method for tautomeric transitions. Relatively higher values of kinetic barriers (about 50-60 kcal/mol) have been observed for the single bases, indicating that tautomeric alterations of isolated single bases are quite unlikely. On the other hand, relatively lower values of the kinetic barriers (about 20-25 kcal/mol) for the DNA base pairs A:T and G:C clearly suggest that the tautomeric shifts are much more favorable in DNA base pairs than in isolated single bases. The unusual base pairing A':C, T':G, C':A or G':T in the daughter DNA molecule, resulting from a parent DNA molecule with tautomeric shifts, is found to be stable enough to result in a mutation. The transition rate constants for the single DNA bases in addition to the base pairs are also calculated by computing the free energy differences between the transition states and the reactants.
Richardson, Magnus J E
2007-08-01
Integrate-and-fire models are mainstays of the study of single-neuron response properties and emergent states of recurrent networks of spiking neurons. They also provide an analytical base for perturbative approaches that treat important biological details, such as synaptic filtering, synaptic conductance increase, and voltage-activated currents. Steady-state firing rates of both linear and nonlinear integrate-and-fire models, receiving fluctuating synaptic drive, can be calculated from the time-independent Fokker-Planck equation. The dynamic firing-rate response is less easy to extract, even at the first-order level of a weak modulation of the model parameters, but is an important determinant of neuronal response and network stability. For the linear integrate-and-fire model the response to modulations of current-based synaptic drive can be written in terms of hypergeometric functions. For the nonlinear exponential and quadratic models no such analytical forms for the response are available. Here it is demonstrated that a rather simple numerical method can be used to obtain the steady-state and dynamic response for both linear and nonlinear models to parameter modulation in the presence of current-based or conductance-based synaptic fluctuations. To complement the full numerical solution, generalized analytical forms for the high-frequency response are provided. A special case is also identified--time-constant modulation--for which the response to an arbitrarily strong modulation can be calculated exactly.
Modeling Future Fire danger over North America in a Changing Climate
NASA Astrophysics Data System (ADS)
Jain, P.; Paimazumder, D.; Done, J.; Flannigan, M.
2016-12-01
Fire danger ratings are used to determine wildfire potential due to weather and climate factors. The Fire Weather Index (FWI), part of the Canadian Forest Fire Danger Rating System (CFFDRS), incorporates temperature, relative humidity, wind speed and precipitation to give a daily fire danger rating that is used by wildfire management agencies in an operational context. Studies using GCM output have shown that future wildfire danger will increase in a warming climate. However, these studies are somewhat limited by the coarse spatial resolution (typically 100-400 km) and temporal resolution (typically 6-hourly to monthly) of the model output. Future wildfire potential over North America based on FWI is calculated using output from the Weather Research and Forecasting (WRF) model, which is used to downscale future climate scenarios from the bias-corrected Community Climate System Model (CCSM) under the RCP8.5 scenario at a spatial resolution of 36 km. We consider five eleven-year time slices: 1990-2000, 2020-2030, 2030-2040, 2050-2060 and 2080-2090. The dynamically downscaled simulation improves determination of future extreme weather by improving both spatial and temporal resolution over most GCMs. To characterize extreme fire weather we calculate annual numbers of spread days (days for which FWI > 19) and the annual 99th percentile of FWI. Additionally, an extreme value analysis based on the peaks-over-threshold method allows us to calculate return values for extreme FWI.
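The three extreme-fire-weather metrics named above can be sketched as below. The exponential tail model in the last function is a deliberate simplification of the generalized Pareto fits commonly used in peaks-over-threshold analysis.

```python
import numpy as np

def spread_days(daily_fwi, threshold=19.0):
    """Annual number of spread days: days with FWI above the study's
    threshold (FWI > 19)."""
    return int(np.sum(np.asarray(daily_fwi) > threshold))

def fwi_p99(daily_fwi):
    """Annual 99th percentile of daily FWI."""
    return float(np.percentile(daily_fwi, 99))

def pot_return_level(exceedances, threshold, rate_per_year, return_period):
    """Peaks-over-threshold return level assuming an exponential tail
    above the threshold (simplified; a GPD with nonzero shape is the
    usual choice).  `rate_per_year` is the mean number of exceedances
    per year; `exceedances` are the values above the threshold."""
    scale = np.mean(np.asarray(exceedances) - threshold)  # exponential MLE
    return threshold + scale * np.log(rate_per_year * return_period)
```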
NASA Astrophysics Data System (ADS)
Xu, Yi; Luo, Wen; Balabanski, Dimiter; Goriely, Stephane; Matei, Catalin; Tesileanu, Ovidiu
2017-09-01
The astrophysical p-process is an important nucleosynthesis pathway that produces the stable, proton-rich nuclei beyond Fe which cannot be reached by the s- and r-processes. In the present study, the astrophysical reaction rates of (γ,n), (γ,p), and (γ,α) reactions are computed with the modern reaction code TALYS for about 3000 stable and proton-rich nuclei with 12 < Z < 110. The nuclear structure ingredients involved in the calculation are determined from experimental data whenever available and, if not, from global microscopic nuclear models. In particular, both the Woods-Saxon potential and the double-folding potential with the density-dependent M3Y (DDM3Y) effective interaction are used in the calculations. It is found that the photonuclear reaction rates are very sensitive to the nuclear potential, so a better determination of the nuclear potential would be important for reducing the uncertainties of the reaction rates. Meanwhile, the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) facility is being developed, which will provide a great opportunity to experimentally study the photonuclear reactions of the p-process. Simulations of the experimental setup for the measurements of the photonuclear reactions 96Ru(γ,p) and 96Ru(γ,α) are performed. They show that experiments on p-process photonuclear reactions at ELI-NP are quite promising.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, C.; Potts, I.; Reeks, M. W., E-mail: mike.reeks@ncl.ac.uk
We present a simple stochastic quadrant model for calculating the transport and deposition of heavy particles in a fully developed turbulent boundary layer based on the statistics of wall-normal fluid velocity fluctuations obtained from a fully developed channel flow. Individual particles are tracked through the boundary layer via their interactions with a succession of random eddies found in each of the quadrants of the fluid Reynolds shear stress domain in a homogeneous Markov chain process. In this way, we are able to account directly for the influence of ejection and sweeping events as others have done, but without resorting to the use of adjustable parameters. Deposition rate predictions for a wide range of heavy particles predicted by the model compare well with benchmark experimental measurements. In addition, deposition rates are compared with those obtained from continuous random walk models and Langevin equation based ejection and sweep models, which noticeably give significantly lower deposition rates. Various statistics related to the particle near-wall behavior are also presented. Finally, we consider the model's limitations when used to calculate deposition in more complex flows where the near-wall turbulence may be significantly different.
Beta decay rates of neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel
2015-10-01
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on nuclear structure models for input. Of all the properties, beta-decay half-lives are among the most important due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available, based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.
Integral processing in beyond-Hartree-Fock calculations
NASA Technical Reports Server (NTRS)
Taylor, P. R.
1986-01-01
The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration-self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which -- at least for most current generation machines -- disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
Dose rate evaluation of workers on the operation floor in Fukushima-Daiichi Unit 3
NASA Astrophysics Data System (ADS)
Matsushita, Kaoru; Kurosawa, Masahiko; Shirai, Keisuke; Matsuoka, Ippei; Mukaida, Naoki
2017-09-01
At Fukushima Daiichi Nuclear Power Plant Unit 3, installation of a fuel handling machine is planned to support the removal of spent fuel. The dose rates at the workplace were calculated based on the source distribution measured using a collimator in order to confirm that the dose rates on the operation floor were within a manageable range. It was confirmed that the accuracy of the source distribution was C/M = 1.0-2.4. These dose rates were then used to plan the work on the operation floor.
Freight Calculation Model: A Case Study of Coal Distribution
NASA Astrophysics Data System (ADS)
Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.
2018-03-01
Coal has been known as one of the energy alternatives used as an energy source for several power plants in Indonesia. Transporting coal from mine sites to power plant locations requires eligible shipping line services that can provide the best freight rate. Therefore, this study aims to obtain standardized formulations for determining the ocean freight, especially for coal distribution, based on theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel and self-propelled barge. The result shows that two cost components are dominant in determining the value of the freight, with their proportion reaching 90% or even more: time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, time charter rate and fuel oil price.
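A minimal freight build-up consistent with the cost components named above might look like this. The parameter names and the cost breakdown are illustrative assumptions, not the paper's exact model.

```python
def freight_per_ton(tc_rate_per_day, fuel_cost_per_day, sailing_days,
                    port_days, port_cost, cargo_tons):
    """Ocean freight (cost per ton of cargo): time charter hire plus fuel
    plus port costs, spread over the cargo carried.  Waiting time at
    ports enters through `port_days`, which extends the charter period."""
    voyage_days = sailing_days + port_days
    charter = tc_rate_per_day * voyage_days   # dominant component
    fuel = fuel_cost_per_day * sailing_days   # dominant component
    total = charter + fuel + port_cost
    return total / cargo_tons
```

With representative inputs, charter hire and fuel together account for most of the total, consistent with the roughly 90% share reported.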
Comparison of methods for H*(10) calculation from measured LaBr3(Ce) detector spectra.
Vargas, A; Cornejo, N; Camp, A
2018-07-01
The Universitat Politecnica de Catalunya (UPC) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) have evaluated methods based on stripping, conversion coefficients and Maximum Likelihood Estimation using Expectation Maximization (ML-EM) for calculating H*(10) rates from photon pulse-height spectra acquired with a spectrometric LaBr3(Ce) (1.5″ × 1.5″) detector. There is good agreement between the results of the different H*(10) rate calculation methods using the spectra measured at the UPC secondary standard calibration laboratory in Barcelona. From the outdoor study at the ESMERALDA station in Madrid, it can be concluded that the analysed methods provide results quite similar to those obtained with the reference RSS ionization chamber. In addition, the spectrometric detectors can also facilitate radionuclide identification. Copyright © 2018 Elsevier Ltd. All rights reserved.
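The conversion-coefficient method, the simplest of the three approaches named above, can be sketched as a per-energy-bin weighting. The function name and the response and h(E) values below are placeholders, not LaBr3(Ce) data.

```python
import numpy as np

def ambient_dose_rate(counts, live_time_s, h10_per_fluence, detector_response):
    """H*(10) rate via the conversion-coefficient method (sketch):
    divide net counts per bin by an assumed energy-dependent detector
    response to get a fluence rate per bin, then weight each bin by the
    fluence-to-H*(10) conversion coefficient h(E) and sum."""
    counts = np.asarray(counts, dtype=float)
    fluence_rate = counts / (live_time_s * np.asarray(detector_response))
    return float(np.sum(fluence_rate * np.asarray(h10_per_fluence)))
```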
Using a Calculated Pulse Rate with an Artificial Neural Network to Detect Irregular Interbeats.
Yeh, Bih-Chyun; Lin, Wen-Piao
2016-03-01
Heart rate is an important clinical measure that is often used in pathological diagnosis and prognosis. Valid detection of irregular heartbeats is crucial in clinical practice. We propose an artificial neural network that uses the calculated pulse rate to detect irregular interbeats. The proposed system measures the calculated pulse rate to determine an "irregular interbeat on" or "irregular interbeat off" event. If an irregular interbeat is detected, the proposed system produces a danger warning, which is helpful for clinicians. If no irregular interbeat is detected, the proposed system displays the calculated pulse rate. We include a flow chart of the proposed software. In an experiment, we measured the calculated pulse rates and achieved an error percentage of < 3% in 20 participants with a wide age range. When we used the calculated pulse rates to detect irregular interbeats, we found such irregular interbeats in eight participants.
Use of erroneous wolf generation time in assessments of domestic dog and human evolution
Mech, L. David; Barber-Meyer, Shannon
2017-01-01
Scientific interest in dog domestication and the parallel evolution of dogs and humans (Wang et al. 2013) has increased recently (Freedman et al. 2014, Larson and Bradley 2014, Franz et al. 2016), and various important conclusions have been drawn based on how long ago the calculations show dogs were domesticated from ancestral wolves (Canis lupus). Calculation of this duration is based on “the most commonly assumed mutation rate of 1 × 10⁻⁸ per generation and a 3-year gray wolf generation time . . .” (Skoglund et al. 2015:3). It is unclear on what information the assumed generation time is based, but Ersmark et al. (2016) seem to have based their assumption on a single wolf (Mech and Seal 1987). The importance of assuring that such assumptions are valid is obvious. Recently, two independent studies employing three large data sets and three methods from two widely separated areas have found that wolf generation time is 4.2-4.7 years. The first study, based on 200 wolves in Yellowstone National Park, used age-specific birth and death rates to calculate a generation time of 4.16 years (vonHoldt et al. 2008). The second, using estimated first-breeding times of 86 female wolves in northeastern Minnesota, found a generation time of 4.3 years, and using uterine examination of 159 female wolves from throughout Minnesota yielded a generation time of 4.7 years (Mech et al. 2016). We suggest that previous studies using a 3-year generation time recalculate their figures, adjust their conclusions based on these generation times, and publish revised results.
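The age-specific approach mentioned above corresponds to the standard life-table generation time, T = Σ x·l(x)·m(x) / Σ l(x)·m(x). A sketch with hypothetical rates (the cited studies' exact estimators may differ in detail):

```python
def generation_time(survivorship, fecundity):
    """Mean age of mothers at offspring birth:
        T = sum(x * l_x * m_x) / sum(l_x * m_x)
    where l_x is survivorship to age x and m_x the age-specific birth
    rate, for ages x = 0, 1, 2, ... years."""
    num = sum(x * l * m
              for x, (l, m) in enumerate(zip(survivorship, fecundity)))
    den = sum(l * m for l, m in zip(survivorship, fecundity))
    return num / den

# Hypothetical wolf-like schedule: breeding starts at age 1,
# survivorship declines with age (values are illustrative only).
T = generation_time([1.0, 0.8, 0.6, 0.4], [0.0, 1.0, 1.0, 1.0])
```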
Stock price prediction using geometric Brownian motion
NASA Astrophysics Data System (ADS)
Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM
2018-03-01
Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before predicting the stock price, we determine the formulation of the expected stock price and set the confidence level at 95%. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and calculating the 95% confidence interval. Based on the research, the analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as shown by a forecast MAPE value ≤ 20%.
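The forecasting steps listed in the abstract can be sketched as below. The estimators are the usual log-return moment estimates, and the function names are hypothetical; the paper's exact formulation may differ.

```python
import numpy as np

def gbm_forecast(prices, horizon, confidence_z=1.96):
    """Expected price and a ~95% interval under geometric Brownian motion:
    step 1, compute log-returns; step 2, estimate volatility and drift;
    step 3, project the expected price S0*exp(mu*t) and the log-normal
    confidence bounds."""
    prices = np.asarray(prices, dtype=float)
    log_ret = np.diff(np.log(prices))            # step 1: returns
    sigma = log_ret.std(ddof=1)                  # step 2: volatility
    mu = log_ret.mean() + 0.5 * sigma ** 2       # step 2: drift
    s0 = prices[-1]
    expected = s0 * np.exp(mu * horizon)         # step 3: expected price
    drift_term = (mu - 0.5 * sigma ** 2) * horizon
    spread = confidence_z * sigma * np.sqrt(horizon)
    lower = s0 * np.exp(drift_term - spread)
    upper = s0 * np.exp(drift_term + spread)
    return expected, (lower, upper)

def mape(actual, forecast):
    """Mean absolute percentage error; <= 20% is taken as accurate."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((a - f) / a))) * 100.0
```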
NASA Technical Reports Server (NTRS)
Milynczak, Martin G.
1991-01-01
The conversion of chemical potential energy and infrared radiative energy to kinetic energy by non-LTE processes involving ozone is a potentially significant source of heat in the terrestrial upper mesosphere and lower thermosphere. Heating rates are calculated and compared using two different statistical equilibrium models previously applied in the analysis of measurements of limb emission from ozone. The calculated heating depends strongly on the assumed distribution and relaxation of energy in the quasi-nascent ozone molecule. Finally, in the absence of a detailed data base of rate coefficients it may be possible to estimate the heating rate due to non-LTE processes in ozone from appropriate satellite measurements of the ozone concentration and of the infrared emission from ozone in the 9-12 micron spectral interval.
Algorithms for the Computation of Debris Risk
NASA Technical Reports Server (NTRS)
Matney, Mark J.
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry, including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.
Algorithms for the Computation of Debris Risks
NASA Technical Reports Server (NTRS)
Matney, Mark
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry, including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.
Determining Greenland Ice Sheet Accumulation Rates from Radar Remote Sensing
NASA Technical Reports Server (NTRS)
Jezek, Kenneth C.
2002-01-01
An important component of NASA's Program for Arctic Regional Climate Assessment (PARCA) is a mass balance investigation of the Greenland Ice Sheet. The mass balance is calculated by taking the difference between the areally integrated snow accumulation and the net ice discharge of the ice sheet. Uncertainties in this calculation include the snow accumulation rate, which has traditionally been determined by interpolating data from ice core samples taken from isolated spots across the ice sheet. The sparse data associated with ice cores, juxtaposed against the high spatial and temporal resolution provided by remote sensing, have motivated scientists to investigate relationships between accumulation rate and microwave observations as an option for obtaining spatially contiguous estimates. The objective of this PARCA continuation proposal was to complete an estimate of the surface accumulation rate on the Greenland Ice Sheet derived from C-band radar backscatter data compiled in the ERS-1 SAR mosaic of data acquired during September-November 1992. An empirical equation, based on elevation and latitude, is used to determine the mean annual temperature. We examine the influence of accumulation rate and mean annual temperature on C-band radar backscatter using a forward model, which incorporates snow metamorphosis and radar backscatter components. Our model is run over a range of accumulation and temperature conditions. Based on the model results, we generate a look-up table, which uniquely maps the measured radar backscatter and mean annual temperature to accumulation rate. Our results compare favorably with in situ accumulation rate measurements falling within our study area.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
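In a matrix model, lambda is the dominant eigenvalue of the stage-projection matrix. A minimal sketch with a hypothetical two-stage matrix (the values are illustrative, not from the study's plant data):

```python
import numpy as np

def growth_rate(projection_matrix):
    """Population growth rate lambda = dominant eigenvalue of the
    stage-structured projection matrix (real for the nonnegative
    matrices used in demography, by Perron-Frobenius)."""
    eigvals = np.linalg.eigvals(np.asarray(projection_matrix, dtype=float))
    return float(max(eigvals.real))

# Hypothetical 2-stage (juvenile, adult) matrix:
A = [[0.0, 1.5],   # adult fecundity
     [0.5, 0.9]]   # juvenile-to-adult transition, adult survival
```

Repeating this calculation on vital rates estimated from small samples, rather than the full population, reproduces the Jensen's-inequality bias the study investigates.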
NASA Astrophysics Data System (ADS)
Teitelbaum, Heshel; Caridade, Pedro J. S. B.; Varandas, António J. C.
2004-06-01
Classical trajectory calculations using the MERCURY/VENUS code have been carried out on the H+O2 reactive system using the DMBE-IV potential energy surface. The vibrational quantum number and the temperature were selected over the ranges v=0 to 15, and T=300 to 10 000 K, respectively. All other variables were averaged. Rate constants were determined for the energy transfer process, H+O2(v)-->H+O2(v''), for the bimolecular exchange process, H+O2(v)-->OH(v')+O, and for the dissociative process, H+O2(v)-->H+O+O. The dissociative process appears to be a mere extension of the process of transferring large amounts of energy. State-to-state rate constants are given for the exchange reaction, and they are in reasonable agreement with previous results, while the energy transfer and dissociative rate constants have never been reported previously. The lifetime distributions of the HO2 complex, calculated as a function of v and temperature, were used as a basis for determining the relative contributions of various vibrational states of O2 to the thermal rate coefficients for recombination at various pressures. This novel approach, based on the complex's ability to survive until it collides in a secondary process with an inert gas, is used here for the first time. Complete falloff curves for the recombination of H+O2 are also calculated over a wide range of temperatures and pressures. The combination of the two separate studies results in pressure- and temperature-dependent rate constants for H+O2(v)(+Ar)⇄HO2(+Ar). It is found that, unlike the exchange reaction, vibrational and rotational-translational energy are liabilities in promoting recombination.
Energy balance in the core of the Saturn plasma sheet: H2O chemistry
NASA Astrophysics Data System (ADS)
Shemansky, D. E.; Yoshii, J.; Liu, X.
2011-10-01
A model of the weakly ionized plasma at Saturn has been developed to investigate the properties of the system. Energy balance is a critical consideration. The present model is based on two sources of mass, H2O and HI. The H2O abundance is a variable; HI is a significant volume of gas flowing through the plasma, imposed by the source at Saturn [1,2,3]. The energy sources are solar radiation and heterogeneous magnetosphere electrons. The model calculations produce energy rates, species partitioning, and relaxation lifetimes. For the first time, the state of the ambient plasma sheet electrons is directly connected to the energy forcing functions. Within the limits of current knowledge, the predicted state of the core region of the plasma sheet in neutral and ionized gas corresponds satisfactorily to observation. The dominant ions in these calculations are H2O+ and H3O+, with lifetimes of several days. The lifetime of H2O is roughly 60 days. In the calculations carried out so far, the predicted source rate for H2O is lower than the rates quoted from the Enceladus encounters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, A.; Chadwick, T.; Makhlouf, M.
This paper deals with the effects of various solidification variables, such as cooling rate, temperature gradient, and solidification rate, on the microstructure and shrinkage defects in aluminum alloy (A356) castings. The effects are first predicted using commercial solidification modeling software and then verified experimentally. For this work, the authors consider a rectangular bar cast in a sand mold. Simulation is performed using SIMULOR, a finite-volume-based casting simulation program. Microstructural variables such as dendritic arm spacing (DAS) and defects (percentage porosity) are calculated from the temperature fields, cooling rate, solidification time, etc., predicted by the software. The same variables are then measured experimentally in the foundry. The test piece is cast in a resin (sodium silicate) bonded sand mold, and the DAS and porosity are determined using scanning electron microscopy and image analysis. The predictions from the software are compared with the experimental results, which are presented and critically analyzed to determine the quality of the predictions. The usefulness of commercial solidification modeling software as a tool for the foundry is also discussed.
Protein Degradation Rate in Arabidopsis thaliana Leaf Growth and Development
Nelson, Clark J.; Castleden, Ian
2017-01-01
We applied 15N labeling approaches to leaves of the Arabidopsis thaliana rosette to characterize their protein degradation rate and understand its determinants. The progressive labeling of new peptides with 15N and measuring the decrease in the abundance of >60,000 existing peptides over time allowed us to define the degradation rate of 1228 proteins in vivo. We show that Arabidopsis protein half-lives vary from several hours to several months based on the exponential constant of the decay rate for each protein. This rate was calculated from the relative isotope abundance of each peptide and the fold change in protein abundance during growth. Protein complex membership and specific protein domains were found to be strong predictors of degradation rate, while N-end amino acid, hydrophobicity, or aggregation propensity of proteins were not. We discovered rapidly degrading subunits in a variety of protein complexes in plastids and identified the set of plant proteins whose degradation rate changed in different leaves of the rosette and correlated with leaf growth rate. From this information, we have calculated the protein turnover energy costs in different leaves and their key determinants within the proteome. PMID:28138016
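The half-lives reported above follow from the exponential constant of each protein's decay. As a minimal illustration of that arithmetic only (the rate constants below are made up, not values from the study), a net degradation rate is obtained after removing the dilution of label caused by growth, and the half-life is ln(2) over that rate:

```python
import math

def degradation_rate(k_loss, k_growth):
    """Net first-order degradation rate constant (per day): observed
    loss rate minus growth dilution.  Illustrative inputs only."""
    return k_loss - k_growth

def half_life_days(k_deg):
    """Half-life of an exponentially decaying protein pool."""
    return math.log(2) / k_deg

k = degradation_rate(k_loss=0.20, k_growth=0.05)
print(round(half_life_days(k), 2))   # -> 4.62 (days)
```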
Overlay design method based on visual pavement distress.
DOT National Transportation Integrated Search
1978-01-01
A method for designing the thickness of overlays for bituminous concrete pavements in Virginia is described. In this method the thickness is calculated by rating the amount and severity of observed pavement distress and determining the total accumula...
Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.
Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B
2015-01-02
Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner. Determination of peak strains caused by loading is central to interpreting results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface of a small number of sacrificed animals. Strain is measured as the limb is loaded to a range of physiological loads known to be anabolic to bone. The load-strain relationship determined by this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain gauging very small bones, which is subject to variability in the placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at the ulnar midshaft using three different methods to estimate strain: (A) Nominal strain values based solely on load-strain calibration; (B) Strains calculated from load-strain calibration, but scaled for differences in mid-shaft cross-sectional geometry among animals; and (C) An alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
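A beam-theory strain estimate of the kind method (C) relies on combines an axial term with a bending term. The sketch below is a generic illustration, not the authors' code; the load, cross-sectional geometry, and the cortical-bone elastic modulus are all assumed numbers.

```python
def periosteal_strain(force_N, area_mm2, moment_Nmm, c_mm, I_mm4,
                      E_MPa=20000.0):
    """Beam-theory surface strain: axial term F/(E*A) plus bending
    term M*c/(E*I).  With N, mm and MPa the result is dimensionless
    strain.  All default/argument values are illustrative assumptions."""
    axial = force_N / (E_MPa * area_mm2)
    bending = (moment_Nmm * c_mm) / (E_MPa * I_mm4)
    return axial + bending

# e.g. a 2 N axial load with some curvature-induced bending
eps = periosteal_strain(force_N=2.0, area_mm2=0.4,
                        moment_Nmm=1.0, c_mm=0.5, I_mm4=0.02)
print(f"{eps * 1e6:.0f} microstrain")   # -> 1500 microstrain
```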
Hassouna, Ashraf H; Bahadur, Yasir A; Constantinescu, Camelia; El Sayed, Mohamed E; Naseem, Hussain; Naga, Adly F
2011-01-01
To investigate the correlation between the dose predicted by the treatment planning system, using digitally reconstructed radiographs or three-dimensional (3D) reconstructed CT images, and the dose measured by semiconductor detectors under clinical conditions of high-dose-rate brachytherapy of the cervix uteri. Thirty-two intracavitary brachytherapy applications were performed for 12 patients with cancer of the cervix uteri. The prescribed dose to Point A was 7 Gy. Dose was calculated for both International Commission on Radiation Units and Measurements (ICRU) bladder and rectal points based on digitally reconstructed radiographs and for 3D CT image-based volumetric calculation of the bladder and rectum. In vivo diode dosimetry was performed for the bladder and rectum. The ICRU reference point and the volumes of 1, 2, and 5 cm3 received 3.6±0.9, 5.6±2.0, 5.1±1.7, and 4.3±1.4 Gy for the bladder and 5.0±1.2, 5.3±1.3, 4.9±1.1, and 4.2±0.9 Gy for the rectum, respectively. The ratio of the 1 cm3 and ICRU reference point doses to the diode dose was 1.8±0.7 and 1.2±0.5 for the bladder and 1.9±0.6 and 1.7±0.5 for the rectum, respectively. 3D image-based dose calculation is the most accurate and reliable method to evaluate the dose delivered to critical organs. In vivo diode dosimetry is an important method of quality assurance, but clinical decisions should be made based on 3D-reconstructed CT image calculations. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2000-01-01
Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
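The scaling behind such profiles can be illustrated for the neutral surface layer, where standard similarity theory gives EDR = u*^3/(kappa z) and a height-independent TKE proportional to u*^2. This is a sketch of the textbook scaling only, not the operational algorithm, which also handles stable and unstable regimes and matches tower observations at 5 and 40 m; the TKE constant is an assumed empirical value.

```python
KAPPA = 0.4   # von Karman constant

def edr_neutral(u_star, z):
    """Neutral surface-layer energy dissipation rate (m^2 s^-3):
    eps = u*^3 / (kappa * z)."""
    return u_star ** 3 / (KAPPA * z)

def tke_neutral(u_star, c=5.5):
    """Height-independent neutral-layer TKE estimate, TKE = c * u*^2
    (c ~ 5.5 is an assumed empirical constant)."""
    return c * u_star ** 2

print(round(edr_neutral(u_star=0.3, z=5.0), 4))   # -> 0.0135
```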
Anharmonic quantum contribution to vibrational dephasing.
Barik, Debashis; Ray, Deb Shankar
2004-07-22
Based on a quantum Langevin equation and its corresponding Hamiltonian within a c-number formalism, we calculate the vibrational dephasing rate of a cubic oscillator. It is shown that the leading-order quantum correction due to anharmonicity of the potential makes a significant contribution to the rate and the frequency shift. We compare our theoretical estimates with those obtained from experiments for the small diatomics N2, O2, and CO.
ERIC Educational Resources Information Center
Horn, Kimberly; Dino, Geri; Kalsekar, Iftekhar; Mody, Reema
2005-01-01
This review summarizes end-of-program quit rates from 6 controlled and 10 field-based Not on Tobacco (NOT) evaluations. Approximately 6,130 youth from 5 states and 489 schools participated. Intent-to-treat and compliant quit rates were calculated at 3 months postbaseline (end-of-program). Results from controlled evaluations revealed an aggregate…
Sidi, Avner; Gravenstein, Nikolaus; Vasilopoulos, Terrie; Lampotang, Samsun
2017-06-02
We describe observed improvements in nontechnical, "higher-order" deficiencies and cognitive performance skills in an anesthesia residency cohort over a 1-year interval. Our main objectives were to evaluate higher-order cognitive performance and to demonstrate that simulation can effectively serve as an assessment of cognitive skills and can help detect "higher-order" deficiencies that are not as well identified by more traditional assessment tools. We hypothesized that simulation can identify longitudinal changes in cognitive skills and that cognitive performance deficiencies can then be remediated over time. We used 50 scenarios to evaluate 35 residents during 2 successive years, and 18 of those 35 residents were evaluated in both years (postgraduate years 3 and then 4) in the same or similar scenarios. Individual basic knowledge and cognitive performance during simulation-based scenarios were assessed using a 20- to 27-item scenario-specific checklist. Items were labeled as basic knowledge/technical (lower-order cognition) or advanced cognitive/nontechnical (higher-order cognition). Identical or similar scenarios were repeated annually by a subset of 18 residents during 2 successive academic years. For every scenario and item, we calculated the group error rate (frequency) and individual (resident) item success. Group success rates are reported as mean (SD), and item success and group error rates are calculated and presented as proportions. For all analyses, the α level was 0.05. Overall, PGY4 residents' error rates were lower and success rates higher for cognitive items than for technical items in the operating room and resuscitation domains. In all 3 clinical domains, the cognitive error rate of PGY4 residents was fairly low (0.00-0.22) and their cognitive success rate was high (0.83-1.00), significantly better than in previous annual assessments (P < 0.05).
Overall, there was an annual decrease in error rates for 2 years, primarily driven by decreases in cognitive errors. The most commonly observed cognitive error types remained anchoring, availability bias, premature closure, and confirmation bias. Simulation-based assessments can highlight cognitive performance areas of relative strength, weakness, and progress in a resident or resident cohort. We believe that they can therefore be used to inform curriculum development including activities that require higher-level cognitive processing.
Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions
NASA Astrophysics Data System (ADS)
Nabi, Jameel-Un; Böyükata, Mahmut
2017-01-01
The structure and the weak interaction mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were later calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr; for the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the conclusion that electron capture rates form an integral part of the weak rates under rp-process conditions and play an important role in nuclear model calculations.
Butterbaugh, Grant; Olejniczak, Piotr; Roques, Betsy; Costa, Richard; Rose, Marcy; Fisch, Bruce; Carey, Michael; Thomson, Jessica; Skinner, John
2004-08-01
Epilepsy research has identified higher rates of learning disorders in patients with temporal lobe epilepsy (TLE). However, most studies have not adequately assessed complex functional adult learning skills, such as reading comprehension and written language. We designed this study to evaluate our predictions that higher rates of reading comprehension, written language, and calculation disabilities would be associated with left TLE versus right TLE. Reading comprehension, written language, and calculation skills were assessed by using selected subtests from the Woodcock-Johnson Psycho-Educational Tests of Achievement-Revised in a consecutive series of 31 presurgical patients with TLE. Learning disabilities were defined by one essential criterion consistent with the Americans with Disabilities Act of 1990. Patients had left hemisphere language dominance based on Wada results, left or right TLE based on inpatient EEG monitoring, and negative magnetic resonance imaging (MRI), other than MRI correlates of mesial temporal sclerosis. Higher rates of reading comprehension, written language, and calculation disabilities were associated with left TLE, as compared with right TLE. Nearly 75% of patients with left TLE, whereas fewer than 10% of those with right TLE, had at least one learning disability. Seizure onset in the language-dominant hemisphere, as compared with the nondominant hemisphere, was associated with higher rates of specific learning disabilities and a history of poor literacy or career development or both. These results support the potential clinical benefits of using lateralization of seizure onset as a predictor of the risk of learning disabilities that, once evaluated, could be accommodated to increase the participation of patients with epilepsy in work and educational settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kojima, H.; Yamada, A.; Okazaki, S., E-mail: okazaki@apchem.nagoya-u.ac.jp
2015-05-07
The intramolecular proton transfer reaction of malonaldehyde in neon solvent has been investigated by mixed quantum–classical molecular dynamics (QCMD) calculations and fully classical molecular dynamics (FCMD) calculations. Comparing these calculated results with those for malonaldehyde in water reported in Part I [A. Yamada, H. Kojima, and S. Okazaki, J. Chem. Phys. 141, 084509 (2014)], the solvent dependence of the reaction rate, the reaction mechanism involved, and the quantum effect therein have been investigated. With FCMD, the reaction rate in weakly interacting neon is lower than that in strongly interacting water. However, with QCMD, the order of the reaction rates is reversed. To investigate the mechanisms in detail, the reactions were categorized into three mechanisms: tunneling, thermal activation, and barrier vanishing. Then, the quantum and solvent effects were analyzed from the viewpoint of the reaction mechanism, focusing on the shape of the potential energy curve and its fluctuations. The higher reaction rate found for neon in QCMD compared with that found for water solvent arises from the tunneling reactions because of the nearly symmetric double-well shape of the potential curve in neon. The thermal activation and barrier vanishing reactions were also accelerated by the zero-point energy. The number of reactions based on these two mechanisms in water was greater than that in neon in both QCMD and FCMD because these reactions are dominated by the strength of solute–solvent interactions.
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2013 CFR
2013-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.
1993-01-01
An axisymmetric single-engine computational fluid dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. This work had two objectives. The first was to calculate an axisymmetric approximation of the nozzle, plume, and base-region flow fields of the S1-C/F1, relate/scale this to flight data, and apply the scaling factor to NLS/STME axisymmetric calculations from a parallel effort. The second was to assess the differences in F1 and STME plume shear-layer development and concentration of combustible gases. This second piece of information was to be input/supporting data for assumptions made in the NLS2 base-temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base-region calculations were made for altitudes of 10,000 ft and 57,000 ft, at vehicle flight velocity and in a stagnant freestream. FDNS was implemented with a 14-species, 28-reaction finite-rate chemistry model plus a soot-burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear-layer constituents are compared to those of an STME plume. Conclusions are made about the validity and status of the analysis and of the NLS2 vehicle base thermal environment definition methodology.
Photolysis Rate Coefficient Calculations in Support of SOLVE II
NASA Technical Reports Server (NTRS)
Swartz, William H.
2005-01-01
A quantitative understanding of photolysis rate coefficients (or "j-values") is essential to determining the photochemical reaction rates that define ozone loss and other crucial processes in the atmosphere. j-Values can be calculated with radiative transfer models, derived from actinic flux observations, or inferred from trace gas measurements. The primary objective of the present effort was the accurate calculation of j-values in the Arctic twilight along NASA DC-8 flight tracks during the second SAGE III Ozone Loss and Validation Experiment (SOLVE II), based in Kiruna, Sweden (68 degrees N, 20 degrees E) during January-February 2003. The JHU/APL radiative transfer model was utilized to produce a large suite of j-values for photolysis processes (over 70 reactions) relevant to the upper troposphere and lower stratosphere. The calculations take into account the actual changes in ozone abundance and apparent albedo of clouds and the Earth's surface along the aircraft flight tracks, as observed by in situ and remote sensing platforms (e.g., EP-TOMS). A secondary objective was to analyze solar irradiance data from NCAR's Direct beam Irradiance Atmospheric Spectrometer (DIAS) on board the NASA DC-8 and to start the development of a flexible, multi-species spectral fitting technique for the independent retrieval of O3, O2·O2, and aerosol optical properties.
Oil release from Macondo well MC252 following the Deepwater Horizon accident.
Griffiths, Stewart K
2012-05-15
Oil flow rates and cumulative discharge from the BP Macondo Prospect well in the Gulf of Mexico are calculated using a physically based model along with wellhead pressures measured at the blowout preventer (BOP) over the 86-day period following the Deepwater Horizon accident. Parameters appearing in the model are determined empirically from pressures measured during well shut-in and from pressures and flow rates measured the preceding day. This methodology rigorously accounts for the ill-characterized evolution of the marine riser, installation and removal of collection caps, and any erosion at the wellhead. The calculated initial flow rate is 67,100 stock-tank barrels per day (stbd), which decays to 54,400 stbd just prior to installation of the capping stack and subsequent shut-in. The calculated cumulative discharge is 5.4 million stock-tank barrels, of which 4.6 million barrels entered the Gulf. Quantifiable uncertainties in these values are -9.3% and +7.5%, yielding a likely total discharge in the range of 4.9 to 5.8 million barrels. Minimum and maximum credible values of this discharge are 4.6 and 6.2 million barrels. Alternative calculations using the reservoir and sea-floor pressures indicate that any erosion within the BOP had little effect on cumulative discharge.
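As a rough consistency check on the quoted figures (this is not the paper's pressure-based model), integrating an assumed linear decay of the flow rate between the two reported endpoint rates over the 86-day flow period lands close to the reported cumulative discharge:

```python
# Trapezoidal integration of an ASSUMED linear rate decay between the
# reported initial and final flow rates (stock-tank barrels per day).
q_initial = 67_100
q_final = 54_400
days = 86

total_stb = 0.5 * (q_initial + q_final) * days
print(f"{total_stb / 1e6:.2f} million barrels")   # -> 5.22 million barrels
```

The result, about 5.2 million barrels, is within a few percent of the paper's 5.4 million, which is expected since the actual decay was not exactly linear.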
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-21
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods because of their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC systematically underestimated the target dose for prostate cases by 1-2% compared with TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while with a 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC.
Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms with accuracy comparable to state-of-the-art CPU-based MC codes.
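The gamma-index comparison used above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal one-dimensional sketch of a global-dose gamma (illustrative only, not the evaluation code used in the study, and with toy Gaussian dose profiles rather than patient data):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dd_frac=0.01, dta_mm=1.0):
    """Minimal 1-D gamma index (global dose normalization).
    dose_ref/dose_eval: dose arrays on common positions x (mm);
    dd_frac: dose-difference criterion as a fraction of the max
    reference dose; dta_mm: distance-to-agreement criterion (mm)."""
    d_norm = dd_frac * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm        # dose-difference term
        dx = (x - xi) / dta_mm                # distance term
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

x = np.linspace(0, 10, 101)                   # positions, mm
ref = np.exp(-((x - 5.0) ** 2) / 4)           # toy dose profile
ev = np.exp(-((x - 5.2) ** 2) / 4)            # same profile shifted 0.2 mm
passing = (gamma_1d(ref, ev, x) <= 1).mean()
print(f"passing rate: {passing:.1%}")         # -> passing rate: 100.0%
```

A pure 0.2 mm shift passes a 1%/1 mm test everywhere because every reference point finds a matching dose within the 1 mm search distance.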
Magma addition rates in continental arcs: New methods of calculation and global implications
NASA Astrophysics Data System (ADS)
Ratschbacher, B. C.; Paterson, S. R.
2017-12-01
The transport of mass, heat and geochemical constituents (elements and volatiles) from the mantle to the atmosphere occurs via magma addition to the lithosphere. Calculation of magma addition rates (MARs) in continental arcs based on exposed proportions of igneous arc rocks is complex and rarely done consistently. Multiple factors influence MAR calculations, such as crust versus mantle contributions to magmas, changes in MARs across the arc and with depth throughout the arc crustal column, `arc tempos' with periods of high and low magmatic activity, the loss of previously emplaced arc rocks by subsequent magmatism and return to the mantle, arc migration, variations in intrusive versus extrusive additions, and evolving arc widths and thicknesses during tectonism. All of these factors need to be considered when calculating MARs. This study makes a new attempt to calculate MARs in continental arcs by studying three arc sections: the Famatinian arc, Argentina; the Sierra Nevada batholith, California; and the Coast Mountain batholith, Washington and British Columbia. Arcs are divided into fore-arc, main-arc and back-arc sections, and `boxes' with a defined width, length and thickness spanning upper, middle and lower crustal levels are assigned to each section. Representative exposed crustal slices for each depth are then used to calculate MARs based on outcrop proportions for each box. Geochemical data are used to infer crustal recycling percentages and the total thickness of the arc. Preliminary results show a correlation between MARs, crustal thicknesses and magmatic flare-up durations.
For instance, the Famatinian arc shows a strong decrease in MARs between the main arc section (9.4 km3/Ma/arc-km) and the fore-arc (0.61 km3/Ma/arc-km) and back-arc (1.52 km3/Ma/arc-km) regions, and an increase in the amount of magmatism with depth. Global MARs over geologic timescales have the potential to constrain mantle melt generation rates and the volatile outgassing contribution of unerupted arc magmas to the balance of volatile element cycling from the mantle to the surface. We address this question by using exposed arc length estimates from 760 Ma to the present (Cao et al. 2017, EPSL) and scaling to MARs based on constraints from the detailed study of the three arc sections and a further division into magma-rich and magma-poor arcs.
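The MAR unit used above, km3/Ma/arc-km, is simply magma volume per unit time normalized by arc length. A trivial sketch of that normalization (the inputs are hypothetical, chosen only so the result reproduces the quoted Famatinian main-arc value):

```python
def magma_addition_rate(volume_km3, duration_Ma, arc_length_km):
    """Magma addition rate normalized per km of arc length
    (km^3/Ma/arc-km).  Inputs here are illustrative, not the
    study's data."""
    return volume_km3 / duration_Ma / arc_length_km

# a hypothetical 47,000 km^3 of arc magma over 10 Ma along 500 km of arc
print(magma_addition_rate(47_000, 10, 500))   # -> 9.4
```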
The vertical slip rate of the Sertengshan piedmont fault, Inner Mongolia, China
NASA Astrophysics Data System (ADS)
Zhang, Hao; He, Zhongtai; Ma, Baoqi; Long, Jianyu; Liang, Kuan; Wang, Jinyan
2017-08-01
The vertical slip rate of a normal fault is one of the most important parameters for evaluating its level of activity. The Sertengshan piedmont fault has been studied since the 1980s, but its absolute vertical slip rate has not been determined. In this paper, we calculate the displacements of the fault by measuring the heights of piedmont terraces on the footwall and the stratigraphic depths of marker strata in the hanging wall. We then calculate the vertical slip rate of the fault based on the displacements and ages of the marker strata. We selected nine sites uniformly along the fault to study the vertical slip rates of the fault. The results show that the elevations of terraces T3 and T1 are approximately 1060 m and 1043 m, respectively. The geological boreholes in the basin adjacent to the nine study sites reveal that the elevation of the bottom of the Holocene series is between 1017 and 1035 m and that the elevation of the top of the lacustrine strata is between 925 and 1009 m. The data from the terraces and boreholes also show that the top of the lacustrine strata is approximately 65 ka old. The vertical slip rates are calculated at 0.74-1.81 mm/a since 65 ka and 0.86-2.28 mm/a since the Holocene. The slip rate is the highest along the Wujiahe segment and is lower to the west and east. Based on the findings of a previous study on the fault system along the northern margin of the Hetao graben basin, the vertical slip rates of the Daqingshan and Langshan faults are higher than those of the Sertengshan and Wulashan faults, and the strike-slip rates of these four northern Hetao graben basin faults are low. These results agree with the vertical slip components of the principal stress field on the faults. The results of our analysis indicate that the Langshankou, Wujiahe, and Wubulangkou areas and the eastern end of the Sertengshan fault are at high risk of experiencing earthquakes in the future.
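The slip-rate arithmetic is straightforward because mm/a is numerically equal to m/ka, so a displacement in meters divided by an age in ka needs no unit conversion. A sketch using one end-member pairing of elevations from the abstract (the specific footwall/hanging-wall pairing here is illustrative, not a claim about which markers the authors correlated):

```python
def vertical_slip_rate(footwall_elev_m, hangingwall_marker_elev_m, age_ka):
    """Vertical slip rate in mm/a from the elevation difference between
    a footwall marker and the correlated horizon in the hanging wall.
    Since mm/a == m/ka, displacement_m / age_ka gives mm/a directly."""
    displacement_m = footwall_elev_m - hangingwall_marker_elev_m
    return displacement_m / age_ka

# e.g. terrace T1 at 1043 m vs the deepest top-of-lacustrine elevation
# (925 m), with the 65 ka marker age
print(round(vertical_slip_rate(1043, 925, 65), 2))   # -> 1.82
```

This reproduces, to rounding, the 1.81 mm/a upper bound reported for the post-65 ka rates.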
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as from the photon source in the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all significant parts of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
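The modification described in the last sentences amounts to a per-cell reweighting of the adjoint source. The sketch below illustrates that idea only; the array names and shapes are assumptions, not the ADVANTG interface, and the toy numbers are invented.

```python
import numpy as np

def weighted_adjoint_source(response, forward_dose, eps=1e-12):
    """Per-cell adjoint source built by weighting the dose-response
    function by the reciprocal of a forward-calculated dose, so that
    low-dose (hard-to-converge) regions receive more importance.
    `eps` guards against division by zero in void/unscored cells."""
    return response / np.maximum(forward_dose, eps)

forward_dose = np.array([1.0, 0.1, 1e-4])   # toy per-cell forward doses
response = np.ones(3)                        # flat dose-response function
print(weighted_adjoint_source(response, forward_dose))
```

Cells whose forward dose is orders of magnitude lower end up with a proportionally larger adjoint source, which is the stated intent of the modified scheme.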
2015-01-01
The glmS ribozyme catalyzes a self-cleavage reaction at the phosphodiester bond between residues A-1 and G1. This reaction is thought to occur by an acid–base mechanism involving the glucosamine-6-phosphate cofactor and G40 residue. Herein quantum mechanical/molecular mechanical free energy simulations and pKa calculations, as well as experimental measurements of the rate constant for self-cleavage, are utilized to elucidate the mechanism, particularly the role of G40. Our calculations suggest that an external base deprotonates either G40(N1) or possibly A-1(O2′), which would be followed by proton transfer from G40(N1) to A-1(O2′). After this initial deprotonation, A-1(O2′) starts attacking the phosphate as a hydroxyl group, which is hydrogen-bonded to deprotonated G40, concurrent with G40(N1) moving closer to the hydroxyl group and directing the in-line attack. Proton transfer from A-1(O2′) to G40 is concomitant with attack of the scissile phosphate, followed by the remainder of the cleavage reaction. A mechanism in which an external base does not participate, but rather the proton transfers from A-1(O2′) to a nonbridging oxygen during nucleophilic attack, was also considered but deemed to be less likely due to its higher effective free energy barrier. The calculated rate constant for the favored mechanism is in agreement with the experimental rate constant measured at biological Mg2+ ion concentration. According to these calculations, catalysis is optimal when G40 has an elevated pKa rather than a pKa shifted toward neutrality, although a balance among the pKa’s of A-1, G40, and the nonbridging oxygen is essential. These results have general implications, as the hammerhead, hairpin, and twister ribozymes have guanines at a similar position as G40. PMID:25526516
The Hawaiian Volcano Observatory's current approach to forecasting lava flow hazards (Invited)
NASA Astrophysics Data System (ADS)
Kauahikaua, J. P.
2013-12-01
Hawaiian volcanoes are best known for their frequent basaltic eruptions, which typically start with fast-moving channelized `a`a flows fed by high eruption rates. If the flows continue, they generally transition into pahoehoe flows, fed by lower eruption rates, after a few days to weeks. Kilauea Volcano's ongoing eruption illustrates this--since 1986, effusion at Kilauea has mostly produced pahoehoe. The current state of lava flow simulation is quite advanced, but the simplicity of the models means that they are most appropriately used during the first, most vigorous, days to weeks of an eruption - during the effusion of `a`a flows. Colleagues at INGV in Catania have shown decisively that MAGFLOW simulations utilizing satellite-derived eruption rates can be effective at estimating hazards during the initial periods of an eruption crisis. However, the algorithms do not simulate the complexity of pahoehoe flows. Forecasts of lava flow hazards are the most common form of volcanic hazard assessments made in Hawai`i. Communications with emergency managers over the last decade have relied on simple steepest-descent line maps, coupled with empirical lava flow advance rate information, to portray the imminence of lava flow hazard to nearby communities. Lavasheds, calculated as watersheds, are used as a broader context for the future flow paths and to advise on the utility of diversion efforts, should they be contemplated. The key is to communicate the uncertainty of any approach used to formulate a forecast and, if the forecast uses simple tools, these communications can be fairly straightforward. The calculation of steepest-descent paths and lavasheds relies on the accuracy of the digital elevation model (DEM) used, so the choice of DEM is critical. In Hawai`i, the best choice is not the most recent but is a 1980s-vintage 10-m DEM--more recent LIDAR and satellite radar DEM are referenced to the ellipsoid and include vegetation effects. 
On low-slope terrain, steepest descent lines calculated on a geoid-based DEM may differ significantly from those calculated on an ellipsoid-based DEM. Good estimates of lava flow advance rates can be obtained from empirical compilations of historical advance rates of Hawaiian lava flows. In this way, rates appropriate for observed flow types (`a`a or pahoehoe, channelized or not) can be applied. Eruption rate is arguably the most important factor, while slope is also significant for low eruption rates. Eruption rate, however, remains the most difficult parameter to estimate during an active eruption. The simplicity of the HVO approach is its major benefit. How much better can lava-flow advance be forecast for all types of lava flows? Will the improvements outweigh the increased uncertainty propagated through the simulation calculations? HVO continues to improve and evaluate its lava flow forecasting tools to provide better hazard assessments to emergency personnel.
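The steepest-descent path calculation described above can be sketched on a gridded DEM. The following is an illustrative 8-neighbor implementation on a toy grid, not HVO's actual GIS tooling; the grid values are hypothetical elevations.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=1000):
    """Follow the lowest 8-connected neighbor downhill until a local minimum."""
    path = [start]
    r, c = start
    for _ in range(max_steps):
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < dem.shape[0] and 0 <= c + dc < dem.shape[1]]
        nr, nc = min(nbrs, key=lambda p: dem[p])
        if dem[nr, nc] >= dem[r, c]:  # local minimum: flow ponds or stops
            break
        r, c = nr, nc
        path.append((r, c))
    return path

# Toy DEM sloping down toward the lower-right corner:
dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(steepest_descent_path(dem, (0, 0)))  # [(0, 0), (1, 1), (2, 2)]
```

On real DEMs the result is sensitive to elevation errors of the same order as the local relief, which is why the abstract stresses the choice of DEM on low-slope terrain.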
Electron-impact ionization of P-like ions forming Si-like ions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, D.-H.; Savin, D. W., E-mail: hkwon@kaeri.re.kr
2014-03-20
We have calculated electron-impact ionization (EII) for P-like systems from P to Zn15+ forming Si-like ions. The work was performed using the flexible atomic code (FAC) which is based on a distorted-wave approximation. All 3ℓ → nℓ' (n = 3-35) excitation-autoionization (EA) channels near the 3p direct ionization threshold and 2ℓ → nℓ' (n = 3-10) EA channels at the higher energies are included. Close attention has been paid to the detailed branching ratios. Our calculated total EII cross sections are compared both with previous FAC calculations, which omitted many of these EA channels, and with the available experimental results. Moreover, for Fe11+, we find that part of the remaining discrepancies between our calculations and recent measurements can be accounted for by the inclusion of the resonant excitation double autoionization process. Lastly, at the temperatures where each ion is predicted to peak in abundance in collisional ionization equilibrium, the Maxwellian rate coefficients derived from our calculations differ by 50%-7% from the previous FAC rate coefficients, with the difference decreasing with increasing charge.
Fully Capitated Payment Breakeven Rate for a Mid-Size Pediatric Practice.
Farmer, Steven A; Shalowitz, Joel; George, Meaghan; McStay, Frank; Patel, Kavita; Perrin, James; Moghtaderi, Ali; McClellan, Mark
2016-08-01
Payers are implementing alternative payment models that attempt to align payment with high-value care. This study calculates the breakeven capitated payment rate for a midsize pediatric practice and explores how several different staffing scenarios affect the rate. We supplemented a literature review and data from >200 practices with interviews of practice administrators, physicians, and payers to construct an income statement for a hypothetical, independent, midsize pediatric practice in fee-for-service. The practice was transitioned to full capitation to calculate the breakeven capitated rate, holding all practice parameters constant. Panel size, overhead, physician salary, and staffing ratios were varied to assess their impact on the breakeven per-member per-month (PMPM) rate. Finally, payment rates from an existing health plan were applied to the practice. The calculated breakeven PMPM was $24.10. When an economic simulation allowed core practice parameters to vary across a broad range, 80% of practices broke even with a PMPM of $35.00. The breakeven PMPM increased by 12% ($3.00) when the staffing ratio increased by 25% and increased by 23% ($5.50) when the staffing ratio increased by 38%. The practice was viable, even with primary care medical home staffing ratios, when rates from a real-world payer were applied. Practices are more likely to succeed in capitated models if pediatricians understand how these models alter practice finances. Staffing changes that are common in patient-centered medical home models increased the breakeven capitated rate. The degree to which team-based care will increase panel size and offset increased cost is unknown. Copyright © 2016 by the American Academy of Pediatrics.
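The breakeven arithmetic above can be sketched directly: divide annual practice cost by member-months, then scale for staffing changes. All figures below are illustrative placeholders chosen to land near the paper's reported $24.10 PMPM, not the study's actual income statement.

```python
# Hypothetical sketch of a breakeven capitated-rate calculation.
def breakeven_pmpm(annual_cost: float, panel_size: int) -> float:
    """Per-member per-month rate at which capitated revenue covers cost."""
    member_months = panel_size * 12
    return annual_cost / member_months

def pmpm_with_staffing_increase(base_pmpm: float, pct_increase: float) -> float:
    """Apply a fractional cost increase (e.g., from a higher staffing ratio)."""
    return base_pmpm * (1 + pct_increase)

# Illustrative: ~$2.3M annual cost and ~7,950 members gives roughly $24.10 PMPM.
rate = breakeven_pmpm(2_300_000, 7_953)
print(round(rate, 2))
```

A 12% cost increase applied to a $24.10 base then raises the breakeven rate by about $3, matching the direction of the staffing-ratio sensitivity the study reports.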
Radiative cooling efficiencies and predicted spectra of species of the Io plasma torus
NASA Technical Reports Server (NTRS)
Shemansky, D. E.
1980-01-01
Calculations of the physical condition of the Io plasma torus have been made based on the recent Voyager EUV observations. The calculations assume a thin plasma in collisional ionization equilibrium among the states within each species. The observations of the torus are all consistent with this condition. The major energy loss mechanism is radiative cooling in discrete transitions. Calculations of radiative cooling efficiencies of the identified species lead to an estimated energy loss rate of at least 1.5 x 10^12 watts. The mean electron temperature and density of the plasma are estimated to be 100,000 K and 2100/cu cm. The estimated number densities of S III, S IV, and O III are roughly 95, 80, and 190-740/cu cm. Upper limits have been placed on a number of other species based on the first published Voyager EUV spectrum of the torus. The assumption that energy is supplied to the torus through injection of neutral particles from Io leads to the conclusion that ion loss rates are controlled by diffusion, and relative species abundances consequently are not controlled by collisional ionization equilibrium.
Assessment of Uncertainty in the Determination of Activation Energy for Polymeric Materials
NASA Technical Reports Server (NTRS)
Darby, Stephania P.; Landrum, D. Brian; Coleman, Hugh W.
1998-01-01
An assessment of the experimental uncertainty in obtaining the kinetic activation energy from thermogravimetric analysis (TGA) data is presented. A neat phenolic resin, Borden SC1008, was heated at three heating rates to obtain weight loss versus temperature data. Activation energy was calculated by two methods: the traditional Flynn and Wall method based on the slope of log(q) versus 1/T, and a modification of this method where the ordinate and abscissa are reversed in the linear regression. The modified method produced a more accurate curve fit of the data, was more sensitive to data nonlinearity, and gave a value of activation energy 75 percent greater than the original method. An uncertainty analysis using the modified method yielded a 60 percent uncertainty in the average activation energy. Based on this result, the activation energy for a carbon-phenolic material was doubled and used to calculate the ablation rate in a typical solid rocket environment. Doubling the activation energy increased surface recession by 3 percent. Current TGA data reduction techniques that use the traditional Flynn and Wall approach to calculate activation energy should be changed to the modified method.
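The two regression variants can be sketched as follows, assuming the Flynn-Wall form log10(q) = a + b/T with E = -R*b/0.457 (the standard isoconversional constant). The data below are noise-free synthetic points, so both variants agree; with real, scattered TGA data the two slopes (and hence the two activation energies) diverge, which is the effect the abstract describes.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def slope_ols(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    return np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)[0]

def activation_energy(inv_T, log_q, reversed_axes=False):
    """Flynn-Wall activation energy estimate (J/mol).

    reversed_axes=True is the 'modified' method: regress 1/T on log10(q)
    and invert the slope, instead of regressing log10(q) on 1/T.
    """
    if reversed_axes:
        b = 1.0 / slope_ols(log_q, inv_T)
    else:
        b = slope_ols(inv_T, log_q)
    return -R * b / 0.457

# Synthetic heating-rate data with an exact slope of -9000 K:
inv_T = np.linspace(1.2e-3, 1.6e-3, 5)
log_q = 10.0 - 9000.0 * inv_T
E = activation_energy(inv_T, log_q)
print(E / 1000)  # ~163.7 kJ/mol
```

With exact data the reversed regression returns the identical energy; the 75 percent difference reported above arises only once the data are nonlinear or noisy.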
A project based on multi-configuration Dirac-Fock calculations for plasma spectroscopy
NASA Astrophysics Data System (ADS)
Comet, M.; Pain, J.-C.; Gilleron, F.; Piron, R.
2017-09-01
We present a project dedicated to hot plasma spectroscopy based on a Multi-Configuration Dirac-Fock (MCDF) code, initially developed by J. Bruneau. The code is briefly described and the use of the transition state method for plasma spectroscopy is detailed. Then an opacity code for local-thermodynamic-equilibrium plasmas using MCDF data, named OPAMCDF, is presented. Transition arrays for which the number of lines is too large to be handled in a Detailed Line Accounting (DLA) calculation can be modeled within the Partially Resolved Transition Array method or using the Unresolved Transition Arrays formalism in jj-coupling. An improvement of the original Partially Resolved Transition Array method is presented which gives a better agreement with DLA computations. Comparisons with some absorption and emission experimental spectra are shown. Finally, the capability of the MCDF code to compute atomic data required for collisional-radiative modeling of plasma at non local thermodynamic equilibrium is illustrated. In addition to photoexcitation, this code can be used to calculate photoionization, electron impact excitation and ionization cross-sections as well as autoionization rates in the Distorted-Wave or Close Coupling approximations. Comparisons with cross-sections and rates available in the literature are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, L.C.
The ORCENT-II digital computer program will perform calculations at valves-wide-open design conditions, maximum guaranteed rating conditions, and an approximation of part-load conditions for steam turbine cycles supplied with throttle steam characteristic of contemporary light-water reactors. Turbine performance calculations are based on a method published by the General Electric Company. Output includes all information normally shown on a turbine-cycle heat balance diagram. The program is written in FORTRAN IV for the IBM System 360 digital computers at the Oak Ridge National Laboratory.
Calculations on the rate of the ion-molecule reaction between NH3(+) and H2
NASA Technical Reports Server (NTRS)
Herbst, Eric; Defrees, D. J.; Talbi, D.; Pauzat, F.; Koch, W.
1991-01-01
The rate coefficient for the ion-molecule reaction NH3(+) + H2 yields NH4(+) + H has been calculated as a function of temperature with the use of the statistical phase space approach. The potential surface and reaction complex and transition state parameters used in the calculation have been taken from ab initio quantum chemical calculations. The calculated rate coefficient has been found to mimic the unusual temperature dependence measured in the laboratory, in which the rate coefficient decreases with decreasing temperature until 50-100 K and then increases at still lower temperatures. Quantitative agreement between experimental and theoretical rate coefficients is satisfactory given the uncertainties in the ab initio results and in the dynamics calculations. The rate coefficient for the unusual three-body process NH3(+) + H2 + He yields NH4(+) + H + He has also been calculated as a function of temperature, and the result was found to agree well with a previous laboratory determination.
Wang, Peng; Fang, Weining; Guo, Beiyuan
2017-04-01
This paper proposes a colored Petri net-based workload evaluation model. A formal interpretation of workload is first introduced, based on mapping Petri net components onto task elements. A Petri net-based description of Multiple Resource theory is then given from a new angle. A new application of the VACP rating scales, named the V/A-C-P unit, and a definition of colored transitions are proposed to model the task process. The workload calculation has four main steps: determine each token's initial position and value; calculate the weights of directed arcs according to the proposed rules; calculate workload from the different transitions; and correct for the influence of repetitive behaviors. Verification experiments were carried out with the Multi-Attribute Task Battery-II software. Our results show a strong correlation between model values and NASA Task Load Index scores (r=0.9513). In addition, the method can distinguish behavior characteristics between different people. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shin, Hyeong-Moo; Ernstoff, Alexi; Arnot, Jon A; Wetmore, Barbara A; Csiszar, Susan A; Fantke, Peter; Zhang, Xianming; McKone, Thomas E; Jolliet, Olivier; Bennett, Deborah H
2015-06-02
We present a risk-based high-throughput screening (HTS) method to identify chemicals for potential health concerns or for which additional information is needed. The method is applied to 180 organic chemicals as a case study. We first obtain information on how the chemical is used and identify relevant use scenarios (e.g., dermal application, indoor emissions). For each chemical and use scenario, exposure models are then used to calculate a chemical intake fraction, or a product intake fraction, accounting for chemical properties and the exposed population. We then combine these intake fractions with use scenario-specific estimates of chemical quantity to calculate daily intake rates (iR; mg/kg/day). These intake rates are compared to oral equivalent doses (OED; mg/kg/day), calculated from a suite of ToxCast in vitro bioactivity assays using in vitro-to-in vivo extrapolation and reverse dosimetry. Bioactivity quotients (BQs) are calculated as iR/OED to obtain estimates of potential impact associated with each relevant use scenario. Of the 180 chemicals considered, 38 had maximum iRs exceeding minimum OEDs (i.e., BQs > 1). For most of these compounds, exposures are associated with direct intake, food/oral contact, or dermal exposure. The method provides high-throughput estimates of exposure and important input for decision makers to identify chemicals of concern for further evaluation with additional information or more refined models.
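The screening arithmetic above reduces to a ratio and a threshold: BQ = iR / OED, with chemicals flagged when the maximum intake rate exceeds the minimum oral equivalent dose (BQ > 1). The chemical names and values below are hypothetical, purely to illustrate the comparison.

```python
# Minimal sketch of the bioactivity-quotient screen.
def bioactivity_quotient(intake_mg_kg_day: float, oed_mg_kg_day: float) -> float:
    """BQ = daily intake rate divided by oral equivalent dose."""
    return intake_mg_kg_day / oed_mg_kg_day

def flag_for_review(scenarios):
    """scenarios: {name: (max_iR, min_OED)} -> names with BQ > 1."""
    return [name for name, (ir, oed) in scenarios.items()
            if bioactivity_quotient(ir, oed) > 1]

chems = {"chem_A": (0.5, 0.1),    # BQ = 5   -> flagged
         "chem_B": (0.02, 0.4)}   # BQ = 0.05 -> not flagged
print(flag_for_review(chems))  # ['chem_A']
```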
39 CFR 3010.26 - Calculation of unused rate adjustment authority.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Calculation of unused rate adjustment authority. 3010.26 Section 3010.26 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL REGULATION OF RATES FOR MARKET DOMINANT PRODUCTS Rules for Applying the Price Cap § 3010.26 Calculation of unused rate adjustment...
The estimation of absorbed dose rates for non-human biota: an extended inter-comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batlle, J. V. I.; Beaugelin-Seiller, K.; Beresford, N. A.
An exercise to compare 10 approaches for the calculation of unweighted whole-body absorbed dose rates was conducted for 74 radionuclides and five of the ICRP's Reference Animals and Plants, or RAPs (duck, frog, flatfish egg, rat and elongated earthworm), selected for this exercise to cover a range of body sizes, dimensions and exposure scenarios. Results were analysed using a non-parametric method requiring no specific hypotheses about the statistical distribution of data. The obtained unweighted absorbed dose rates for internal exposure compare well between the different approaches, with 70% of the results falling within a range of variation of ±20%. The variation is greater for external exposure, although 90% of the estimates are within an order of magnitude of one another. There are some discernible patterns where specific models over- or under-predicted. These are explained based on the methodological differences including number of daughter products included in the calculation of dose rate for a parent nuclide; source-target geometry; databases for discrete energy and yield of radionuclides; rounding errors in integration algorithms; and intrinsic differences in calculation methods. For certain radionuclides, these factors combine to generate systematic variations between approaches. Overall, the technique chosen to interpret the data enabled methodological differences in dosimetry calculations to be quantified and compared, allowing the identification of common issues between different approaches and providing greater assurance on the fundamental dose conversion coefficient approaches used in available models for assessing radiological effects to biota.
Shielding calculations for industrial 5/7.5 MeV electron accelerators using the MCNP Monte Carlo Code
NASA Astrophysics Data System (ADS)
Peri, Eyal; Orion, Itzhak
2017-09-01
High energy X-rays from accelerators are used to irradiate food ingredients to prevent the growth and development of unwanted biological organisms in food, thereby extending the shelf life of the products. The X-rays are produced by accelerating electrons to 5 MeV and stopping them in a heavy (high-Z) target. Since 2004, the FDA has approved the use of 7.5 MeV energy, providing higher production rates with lower treatment costs. In this study we calculated all the essential data needed for a straightforward concrete shielding design of typical food accelerator rooms. The following evaluations were done using the MCNP Monte Carlo code system: (1) Angular dependence (0-180°) of photon dose rate for 5 MeV and 7.5 MeV electron beams bombarding iron, aluminum, gold, tantalum, and tungsten targets. (2) Angular dependence (0-180°) spectral distribution simulations of bremsstrahlung for gold, tantalum, and tungsten bombarded by 5 MeV and 7.5 MeV electron beams. (3) Concrete attenuation calculations at several photon emission angles for the 5 MeV and 7.5 MeV electron beams bombarding a tantalum target. Based on the simulations, we calculated the expected increase in dose rate for facilities intending to increase the energy from 5 MeV to 7.5 MeV, and the additional concrete thickness needed to keep the existing dose rate unchanged.
Coldman, Andrew; Phillips, Norm
2013-07-09
There has been growing interest in the overdiagnosis of breast cancer as a result of mammography screening. We report incidence rates in British Columbia before and after the initiation of population screening and provide estimates of overdiagnosis. We obtained the numbers of breast cancer diagnoses from the BC Cancer Registry and screening histories from the Screening Mammography Program of BC for women aged 30-89 years between 1970 and 2009. We calculated age-specific rates of invasive breast cancer and ductal carcinoma in situ. We compared these rates by age, calendar period and screening participation. We obtained 2 estimates of overdiagnosis from cumulative cancer rates among women between the ages of 40 and 89 years: the first estimate compared participants with nonparticipants; the second estimate compared observed and predicted population rates. We calculated participation-based estimates of overdiagnosis to be 5.4% for invasive disease alone and 17.3% when ductal carcinoma in situ was included. The corresponding population-based estimates were -0.7% and 6.7%. Participants had higher rates of invasive cancer and ductal carcinoma in situ than nonparticipants but lower rates after screening stopped. Population incidence rates for invasive cancer increased after 1980; by 2009, they had returned to levels similar to those of the 1970s among women under 60 years of age but remained elevated among women 60-79 years old. Rates of ductal carcinoma in situ increased in all age groups. The extent of overdiagnosis of invasive cancer in our study population was modest and primarily occurred among women over the age of 60 years. However, overdiagnosis of ductal carcinoma in situ was elevated for all age groups. The estimation of overdiagnosis from observational data is complex and subject to many influences. The use of mammography screening in older women has an increased risk of overdiagnosis, which should be considered in screening decisions.
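The population-based overdiagnosis estimate above is, at its core, the percent excess of observed over predicted cumulative incidence. A minimal sketch, with purely illustrative inputs rather than the BC registry counts:

```python
# Hedged sketch of a population-based overdiagnosis estimate.
def overdiagnosis_pct(observed_cum: float, predicted_cum: float) -> float:
    """Percent excess of observed cumulative cancer rate over the prediction.

    Negative values (as in the paper's -0.7% invasive-only estimate) mean
    the observed rate fell below the predicted rate.
    """
    return 100.0 * (observed_cum - predicted_cum) / predicted_cum

# Illustrative: 6.7% overdiagnosis corresponds to observing 6.7% more
# cumulative cases than predicted (e.g., 1067 vs. 1000 per some denominator).
print(round(overdiagnosis_pct(1067.0, 1000.0), 1))  # 6.7
```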
NASA Astrophysics Data System (ADS)
Yang, Xin; Zhong, Shiquan; Sun, Han; Tan, Zongkun; Li, Zheng; Ding, Meihua
Cloud is an important climatic resource, comparable to temperature, precipitation, and solar radiation, and it plays a significant role in agricultural production, the national economy, and agricultural climate division. This paper analyzes cloud detection methods based on MODIS data in China and abroad. The results suggest that the Quanjun He method is suitable for detecting cloud over Guangxi, and a map of cloud cover state in Guangxi is produced with this method. An approach to calculating cloud cover rate using frequency spectrum analysis is developed, and the cloud cover rate distribution of Guangxi is obtained. Taking Rongxian County, Guangxi, as an example, this article analyzes a preliminary application of cloud cover rate to the distribution of Rong Shaddock pomelo. The analysis results indicate that cloud cover rate is closely related to the quality of Rong Shaddock pomelo.
Earthquake insurance pricing: a risk-based approach.
Lin, Jeng-Hsiang
2018-04-01
Flat earthquake premiums are 'uniformly' set for a variety of buildings in many countries, neglecting the fact that the risk of earthquake damage to buildings depends on a wide range of factors. How these factors influence insurance premiums is worth further study. Proposed herein is a risk-based approach to estimating the earthquake insurance rates of buildings. Examples of application of the approach to buildings located in Taipei city of Taiwan were examined. Then, the earthquake insurance rates for the buildings investigated were calculated and tabulated. To fulfil insurance rating, the buildings were classified into 15 model building types according to their construction materials and building height. Seismic design levels were also considered in insurance rating in response to the effect of seismic zone and construction years of buildings. This paper may be of interest to insurers, actuaries, and private and public sectors of insurance. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.
Astrophysical reaction rate for α(αn,γ)9Be by photodisintegration
NASA Astrophysics Data System (ADS)
Sumiyoshi, K.; Utsunomiya, H.; Goko, S.; Kajino, T.
2002-10-01
We study the astrophysical reaction rate for the formation of 9Be through the three-body reaction α(αn,γ). This reaction is one of the key reactions which could bridge the mass gap at the A=8 nuclear systems to produce intermediate-to-heavy mass elements in alpha- and neutron-rich environments such as r-process nucleosynthesis in supernova explosions, s-process nucleosynthesis in asymptotic giant branch (AGB) stars, and primordial nucleosynthesis in baryon inhomogeneous cosmological models. To calculate the thermonuclear reaction rate in a wide range of temperatures, we numerically integrate the thermal average of cross sections assuming a two-step formation through a metastable 8Be, α+α⇌8Be(n,γ)9Be. Off-resonant and on-resonant contributions from the ground state in 8Be are taken into account. As input cross section, we adopt the latest experimental data by photodisintegration of 9Be with laser-electron photon beams, which cover all relevant resonances in 9Be. Experimental data near the neutron threshold are added with γ-ray flux corrections and a new least-squares analysis is made to deduce resonance parameters in the Breit-Wigner formulation. Based on the photodisintegration cross section, we provide the reaction rate for α(αn,γ)9Be in the temperature range from T9=10^-3 to T9=10^1 (T9 is the temperature in units of 10^9 K) both in tabular form and in analytical form for potential usage in nuclear reaction network calculations. The calculated reaction rate is compared with the reaction rates of the CF88 and the NACRE compilations. The CF88 rate, which is based on the photoneutron cross section for the 1/2+ state in 9Be by Berman et al., is valid only at T9>0.028 due to its lack of the off-resonant contribution. The CF88 rate differs from the present rate by a factor of two in the temperature range T9⩾0.1. 
The NACRE rate, which adopted different sources of experimental information on resonance states in 9Be, is 4-12 times larger than the present rate at T9⩽0.028, but is consistent with the present rate to within ±20% at T9⩾0.1.
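The thermal averaging step described above can be sketched numerically: fold a cross section with a Maxwell-Boltzmann energy weight and integrate. This is an illustrative single-resonance Breit-Wigner toy in arbitrary units, not the paper's actual 9Be resonance parameters or two-step rate formula.

```python
import numpy as np

def breit_wigner(E, E_r, Gamma, sigma_peak):
    """Single-level Breit-Wigner shape, peaking at sigma_peak when E = E_r."""
    return sigma_peak * (Gamma / 2.0) ** 2 / ((E - E_r) ** 2 + (Gamma / 2.0) ** 2)

def thermal_average(kT, E_r=0.1, Gamma=0.01, sigma_peak=1.0):
    """Average of sigma(E) over the thermal weight E*exp(-E/kT) (Riemann sum).

    All parameters are placeholders in arbitrary units.
    """
    E = np.linspace(1e-4, 1.0, 20000)
    w = E * np.exp(-E / kT)
    return float(np.sum(breit_wigner(E, E_r, Gamma, sigma_peak) * w) / np.sum(w))

# The on-resonant contribution grows as kT approaches the resonance energy,
# which is why a rate valid at low temperature must also include the
# off-resonant part (the CF88 limitation noted above):
print(thermal_average(0.1) > thermal_average(0.01))  # True
```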
Implementation of the common phrase index method on the phrase query for information retrieval
NASA Astrophysics Data System (ADS)
Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah
2017-08-01
With the development of technology, finding information in news text has become easy, because news text is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using search engines. In the process of finding relevant documents with a search engine, a phrase is often used as a query. The number of words that make up the phrase query, and their positions, obviously affect the relevance of the documents produced; as a result, the accuracy of the information obtained is affected. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method in information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with the stages of pre-processing, indexing, term-weighting calculation, and cosine similarity calculation. The system then displays the document search results ranked by cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used in the evaluation stage. First, the relevant documents were determined using a kappa statistic calculation. Second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgments were suitable for system evaluation. The calculations then produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results it can be said that the success rate of the system in producing relevant documents is low.
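The evaluation arithmetic in this abstract is standard: precision and recall from retrieved versus relevant document sets, and F-measure as their harmonic mean. The document sets below are hypothetical; the final line checks the reported averages (P=0.37, R=0.50 give F of about 0.43).

```python
def precision_recall_f(retrieved, relevant):
    """Precision, recall, and F-measure for one query."""
    hits = len(set(retrieved) & set(relevant))
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f

def f_measure(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

# Hypothetical query: 3 documents retrieved, 4 actually relevant, 2 overlap.
p, r, f = precision_recall_f({"d1", "d3", "d7"}, {"d1", "d2", "d3", "d4"})
print(p, r)                             # 0.666..., 0.5
print(round(f_measure(0.37, 0.50), 2))  # 0.43
```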
Atmospheric turbulence affects wind turbine nacelle transfer functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
St. Martin, Clara M.; Lundquist, Julie K.; Clifton, Andrew
Despite their potential as a valuable source of individual turbine power performance and turbine array energy production optimization information, nacelle-mounted anemometers have often been neglected because complex flows around the blades and nacelle interfere with their measurements. This work quantitatively explores the accuracy of and potential corrections to nacelle anemometer measurements to determine the degree to which they may be useful when corrected for these complex flows, particularly for calculating annual energy production (AEP) in the absence of other meteorological data. Using upwind meteorological tower measurements along with nacelle-based measurements from a General Electric (GE) 1.5sle model, we calculate empirical nacelle transfer functions (NTFs) and explore how they are impacted by different atmospheric and turbulence parameters. This work provides guidelines for the use of NTFs for deriving useful wind measurements from nacelle-mounted anemometers. Corrections to the nacelle anemometer wind speed measurements can be made with NTFs and used to calculate an AEP that comes within 1 % of an AEP calculated with upwind measurements. We also calculate unique NTFs for different atmospheric conditions defined by temperature stratification as well as turbulence intensity, turbulence kinetic energy, and wind shear. During periods of low stability as defined by the Bulk Richardson number (RB), the nacelle-mounted anemometer underestimates the upwind wind speed more than during periods of high stability at some wind speed bins below rated speed, leading to a steeper NTF during periods of low stability. Similarly, during periods of high turbulence, the nacelle-mounted anemometer underestimates the upwind wind speed more than during periods of low turbulence at most wind bins between cut-in and rated wind speed. 
Based on these results, we suggest different NTFs be calculated for different regimes of atmospheric stability and turbulence for power performance validation purposes.
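A minimal sketch of the NTF idea: fit upwind (tower) wind speed against the nacelle anemometer reading, then apply the fit to correct new nacelle readings. A single linear fit is assumed here for brevity; the study's NTFs are computed bin-wise and separately per stability and turbulence regime. The data below are synthetic.

```python
import numpy as np

def fit_ntf(nacelle, upwind):
    """Fit a linear nacelle transfer function: upwind ~ a * nacelle + b."""
    a, b = np.polyfit(nacelle, upwind, 1)
    return a, b

def correct(nacelle_speed, ntf):
    """Apply the NTF to estimate the upwind (free-stream) wind speed."""
    a, b = ntf
    return a * nacelle_speed + b

# Synthetic example where the nacelle anemometer underestimates upwind speed:
nacelle = np.array([4.0, 6.0, 8.0, 10.0])
upwind = 1.05 * nacelle + 0.2
ntf = fit_ntf(nacelle, upwind)
print(round(correct(7.0, ntf), 2))  # 7.55
```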
St. Martin, Clara M.; Lundquist, Julie K.; Clifton, Andrew; ...
2016-11-01
Using detailed upwind and nacelle-based measurements from a General Electric (GE) 1.5sle model with a 77 m rotor diameter, we calculate power curves and annual energy production (AEP) and explore their sensitivity to different atmospheric parameters to provide guidelines for the use of stability and turbulence filters in segregating power curves. The wind measurements upwind of the turbine include anemometers mounted on a 135 m meteorological tower as well as profiles from a lidar. We calculate power curves for different regimes based on turbulence parameters such as turbulence intensity (TI) as well as atmospheric stability parameters such as the bulk Richardson number (R B). We also calculate AEP with and without these atmospheric filters and highlight differences between the results of these calculations. The power curves for different TI regimes reveal that increased TI undermines power production at wind speeds near rated, but TI increases power production at lower wind speeds at this site, the US Department of Energy (DOE) National Wind Technology Center (NWTC). Similarly, power curves for different R B regimes reveal that periods of stable conditions produce more power at wind speeds near rated and periods of unstable conditions produce more power at lower wind speeds. AEP results suggest that calculations without filtering for these atmospheric regimes may overestimate the AEP. Because of statistically significant differences between power curves and AEP calculated with these turbulence and stability filters for this turbine at this site, we suggest implementing an additional step in analyzing power performance data to incorporate effects of atmospheric stability and turbulence across the rotor disk.
Atmospheric turbulence affects wind turbine nacelle transfer functions
St. Martin, Clara M.; Lundquist, Julie K.; Clifton, Andrew; ...
2017-06-02
Despite their potential as a valuable source of individual turbine power performance and turbine array energy production optimization information, nacelle-mounted anemometers have often been neglected because complex flows around the blades and nacelle interfere with their measurements. This work quantitatively explores the accuracy of and potential corrections to nacelle anemometer measurements to determine the degree to which they may be useful when corrected for these complex flows, particularly for calculating annual energy production (AEP) in the absence of other meteorological data. Using upwind meteorological tower measurements along with nacelle-based measurements from a General Electric (GE) 1.5sle model, we calculate empirical nacelle transfer functions (NTFs) and explore how they are impacted by different atmospheric and turbulence parameters. This work provides guidelines for the use of NTFs for deriving useful wind measurements from nacelle-mounted anemometers. Corrections to the nacelle anemometer wind speed measurements can be made with NTFs and used to calculate an AEP that comes within 1% of an AEP calculated with upwind measurements. We also calculate unique NTFs for different atmospheric conditions defined by temperature stratification as well as turbulence intensity, turbulence kinetic energy, and wind shear. During periods of low stability as defined by the bulk Richardson number (R B), the nacelle-mounted anemometer underestimates the upwind wind speed more than during periods of high stability at some wind speed bins below rated speed, leading to a steeper NTF during periods of low stability. Similarly, during periods of high turbulence, the nacelle-mounted anemometer underestimates the upwind wind speed more than during periods of low turbulence at most wind bins between cut-in and rated wind speed. Based on these results, we suggest different NTFs be calculated for different regimes of atmospheric stability and turbulence for power performance validation purposes.
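At its core, the empirical NTF described above is a binned mapping from nacelle-measured wind speed to upwind wind speed. A minimal sketch of that binning step, assuming a simple bin-average definition (the function names and bin width are illustrative, not the authors' code):

```python
import numpy as np

def nacelle_transfer_function(nacelle_ws, upwind_ws, bin_width=0.5):
    """Empirical NTF: mean upwind wind speed per nacelle wind-speed bin.
    Returns (bin_centers, mean_upwind) for bins that contain data."""
    nacelle_ws = np.asarray(nacelle_ws, float)
    upwind_ws = np.asarray(upwind_ws, float)
    edges = np.arange(0.0, nacelle_ws.max() + bin_width, bin_width)
    idx = np.digitize(nacelle_ws, edges)
    centers, means = [], []
    for i in range(1, len(edges)):
        mask = idx == i
        if mask.any():
            centers.append(0.5 * (edges[i - 1] + edges[i]))
            means.append(upwind_ws[mask].mean())
    return np.array(centers), np.array(means)
```

A correction then amounts to interpolating a nacelle reading onto this curve before computing a power curve or AEP.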
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted feature based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method, utilizing subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we get a weighted feature Gaussian kernel function and construct a classifier based on Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a good correct-recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
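The weighting scheme described, per-subregion recognition rates folded into a Gaussian kernel, can be sketched as follows (the even block split of the feature vector and all names are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def subregion_weights(recognition_rates):
    """Normalize per-subregion recognition rates into kernel weights."""
    r = np.asarray(recognition_rates, dtype=float)
    return r / r.sum()

def weighted_gaussian_kernel(X, Y, weights, sigma=1.0):
    """Weighted-feature Gaussian kernel:
    K(x, y) = exp(-sum_i w_i * ||x_i - y_i||**2 / (2 * sigma**2)),
    where x_i is the feature sub-vector from subregion i."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    # split each feature vector evenly into one block per subregion
    Xb = np.array_split(X, len(weights), axis=1)
    Yb = np.array_split(Y, len(weights), axis=1)
    d2 = np.zeros((X.shape[0], Y.shape[0]))
    for w, xb, yb in zip(weights, Xb, Yb):
        diff = xb[:, None, :] - yb[None, :, :]
        d2 += w * np.sum(diff**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))
```

The resulting kernel matrix can be passed to any SVM solver that accepts precomputed kernels.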
Zheng, Jianqiu; Thornton, Peter; Painter, Scott; Gu, Baohua; Wullschleger, Stan; Graham, David
2018-06-13
This anaerobic carbon decomposition model is developed with explicit representation of fermentation, methanogenesis, and iron reduction by combining three well-known modeling approaches from different disciplines: a pool-based model representing upstream carbon transformations and replenishment of the DOC pool, a thermodynamically based model calculating rate kinetics and biomass growth for methanogenesis and Fe(III) reduction, and a humic ion-binding model for aqueous-phase speciation and pH calculation. All three are implemented in the open-source geochemical model PHREEQC (V3.0). Installation of PHREEQC is required to run this model.
NASA Astrophysics Data System (ADS)
Guo, Xinwei; Qu, Zexing; Gao, Jiali
2018-01-01
The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.
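The Marcus-Hush picture referred to above turns an electronic-coupling estimate into a charge-transfer rate via the classical nonadiabatic Marcus expression. A hedged sketch (parameter values in the test are illustrative, not from the paper):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K
EV = 1.602176634e-19    # 1 eV in J

def marcus_rate(h_ab_ev, lambda_ev, dg_ev, temp_k=300.0):
    """Classical nonadiabatic Marcus electron-transfer rate (s^-1):
    k = (2*pi/hbar) * |H_ab|**2 * (4*pi*lambda*kB*T)**-0.5
        * exp(-(dG + lambda)**2 / (4*lambda*kB*T))"""
    h_ab, lam, dg = h_ab_ev * EV, lambda_ev * EV, dg_ev * EV
    pref = (2.0 * math.pi / HBAR) * h_ab**2 / math.sqrt(4.0 * math.pi * lam * KB * temp_k)
    return pref * math.exp(-(dg + lam)**2 / (4.0 * lam * KB * temp_k))
```

Note the quadratic dependence on the coupling: a 17 meV error in H_ab (the MUE quoted above) propagates to the rate roughly in proportion to 2 dH/H.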
Effect of Different Gums on Rheological Properties of Slurry
NASA Astrophysics Data System (ADS)
Weikey, Yogita; Sinha, S. L.; Dewangan, S. K.
2018-02-01
This paper presents the effect of different natural gums on water-bentonite slurry, which is used as the base fluid in water-based drilling fluids. The gums used are Babul gum (Acacia nilotica), Dhawda gum (Anogeissus latifolia), Katira gum (Cochlospermum religiosum) and Semal gum (Bombax ceiba). For the present investigation, samples were prepared by varying the concentration of each gum. The variation of shear stress with shear rate has been plotted, and the behaviour of the fluids has been explained on this basis. The values of k and n are calculated using the power law; R² values are also calculated to support the choice of gum.
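Fitting the power-law (Ostwald-de Waele) model, tau = k * gamma_dot**n, together with an R² value, as the abstract describes, reduces to linear regression in log space. A minimal sketch (function names are assumptions):

```python
import numpy as np

def fit_power_law(shear_rate, shear_stress):
    """Fit tau = k * gamma_dot**n by linear regression in log space:
    log(tau) = log(k) + n * log(gamma_dot).
    Returns (k, n, r_squared); R^2 is computed in log space."""
    x = np.log(np.asarray(shear_rate, float))
    y = np.log(np.asarray(shear_stress, float))
    n, log_k = np.polyfit(x, y, 1)
    y_hat = log_k + n * x
    ss_res = np.sum((y - y_hat)**2)
    ss_tot = np.sum((y - y.mean())**2)
    return float(np.exp(log_k)), float(n), float(1.0 - ss_res / ss_tot)
```

An n below 1 indicates shear-thinning behaviour, which is typical for gum-thickened slurries.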
NASA Astrophysics Data System (ADS)
Potter, William J.
2017-02-01
We calculate the severe radiative energy losses which occur at the base of black hole jets using a relativistic fluid jet model, including in situ acceleration of non-thermal leptons by magnetic reconnection. Our results demonstrate that including a self-consistent treatment of radiative energy losses is necessary to perform accurate magnetohydrodynamic simulations of powerful jets and that jet spectra calculated via post-processing are liable to vastly overestimate the amount of non-thermal emission. If no more than 95 per cent of the initial total jet power is radiated away by the plasma as it travels along the length of the jet, we can place a lower bound on the magnetization of the jet plasma at the base of the jet. For typical powerful jets, we find that the plasma at the jet base is required to be highly magnetized, with at least 10 000 times more energy contained in magnetic fields than in non-thermal leptons. Using a simple power-law model of magnetic reconnection, motivated by simulations of collisionless reconnection, we determine the allowed range of the large-scale average reconnection rate along the jet, by restricting the total radiative energy losses incurred and the distance at which the jet first comes into equipartition. We calculate analytic expressions for the cumulative radiative energy losses due to synchrotron and inverse-Compton emission along jets, and derive analytic formulae for the constraint on the initial magnetization.
Melakeberhan, H; Ferris, H
1988-10-01
Food (energy) consumption rates of Meloidogyne incognita were calculated on Vitis vinifera cv. French Colombard (highly susceptible) and cv. Thompson Seedless (moderately resistant). One-month-old grape seedlings in styrofoam cups were inoculated with 2,000 or 8,000 M. incognita second-stage juveniles (J2) and maintained at 17.5 degree days (DD - base 10 C)/day until maximum adult female growth and (or) the end of oviposition. At 70 DD intervals, nematode fresh biomass was calculated on the basis of volumes of 15-20 nematodes per plant obtained with a digitizer and computer algorithm. Egg production was measured at 50-80 DD intervals by weighing 7-10 egg masses and counting the number of eggs. Nematode growth and food (energy) consumption rates were calculated up to 1,000 DD based on biomass increase, respiratory requirements, and an assumption of 60% assimilation efficiency. The growth rate of a single root-knot nematode, excluding egg production, was similar in both cultivars and had a logistic form. The maximum fresh weight of a mature female nematode was ca. 29-32 μg. The total biomass increase, including egg production, also had a logistic form. Maximum biomass (mature adult female and egg mass) was 211 μg on French Colombard and 127 μg on Thompson Seedless. The calculated total cost to the host for the development of a single J2 from root penetration to the end of oviposition for body growth and total biomass was 0.535 and 0.486 calories, with a total energy demand of 1.176 and 0.834 calories, in French Colombard and Thompson Seedless, respectively.
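The thermal-time bookkeeping used above (degree-days above a 10 C base) and the consumption estimate from growth, respiration, and an assumed 60% assimilation efficiency can be sketched as follows (the function names and the simple energy balance are illustrative assumptions):

```python
def degree_days(daily_mean_temps_c, base_c=10.0):
    """Accumulate thermal time (degree-days) above a base temperature."""
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

def consumption_cal(growth_cal, respiration_cal, efficiency=0.60):
    """Energy consumption implied by production plus respiration at a
    given assimilation efficiency (the abstract assumes 60%)."""
    return (growth_cal + respiration_cal) / efficiency
```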
A New ENDF/B-VII.0 Based Multigroup Cross-Section Library for Reactor Dosimetry
NASA Astrophysics Data System (ADS)
Alpan, F. A.; Anderson, S. L.
2009-08-01
The latest of the ENDF/B libraries, ENDF/B-VII.0, was released in December 2006. In this paper, the ENDF/B-VII.0 evaluations were used in generating a new coupled neutron/gamma multigroup library having the same group structure as VITAMIN-B6, i.e., the 199-neutron, 42-gamma group library. The new library was generated utilizing NJOY99.259 for pre-processing and the AMPX modules for post-processing of cross sections. An ENDF/B-VI.3 based VITAMIN-B6-like library was also generated. The fine-group libraries and the ENDF/B-VI.3 based 47-neutron, 20-gamma group BUGLE-96 library were used with the discrete ordinates code DORT to obtain a three-dimensional synthesized flux distribution from r, r-θ, and r-z models for a standard Westinghouse 3-loop design reactor. Reaction rates were calculated for ex-vessel neutron dosimetry containing 63Cu(n,α)60Co, 46Ti(n,p)46Sc, 54Fe(n,p)54Mn, 58Ni(n,p)58Co, 238U(n,f)137Cs, 237Np(n,f)137Cs, and 59Co(n,γ)60Co (bare and cadmium covered) reactions. Results were compared to measurements. In comparing the 199-neutron, 42-gamma group ENDF/B-VI.3 and ENDF/B-VII.0 libraries, it was observed that the ENDF/B-VI.3 based library results were in better agreement with measurements. There is a maximum difference of 7% (for the 63Cu(n,α)60Co reaction rate calculation) between ENDF/B-VI.3 and ENDF/B-VII.0. Differences between the ENDF/B-VI.3 and ENDF/B-VII.0 libraries are due to 16O, 1H, 90Zr, 91Zr, 92Zr, 238U, and 239Pu evaluations. Both ENDF/B-VI.3 and ENDF/B-VII.0 library calculated reaction rates are within 20% of measurement and meet the criterion specified in the U.S. Nuclear Regulatory Commission Regulatory Guide 1.190, "Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence."
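Dosimetry reaction rates of the kind compared above are group-wise foldings of a multigroup flux with a multigroup cross section, and the 20% acceptance criterion is a simple calculated-to-measured ratio test. A sketch under those assumptions (names are illustrative):

```python
def reaction_rate(flux, cross_section):
    """Fold a multigroup flux with a multigroup cross section:
    R = sum_g phi_g * sigma_g."""
    if len(flux) != len(cross_section):
        raise ValueError("group structures must match")
    return sum(p * s for p, s in zip(flux, cross_section))

def within_tolerance(calculated, measured, tol=0.20):
    """Check a calculated reaction rate against measurement (C/M test)."""
    return abs(calculated - measured) / measured <= tol
```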
Wu, Renhua; Xiao, Gang; Zhou, Iris Yuwen; Ran, Chongzhao; Sun, Phillip Zhe
2015-03-01
Chemical exchange saturation transfer (CEST) MRI is sensitive to labile proton concentration and exchange rate, thus allowing measurement of dilute CEST agent and microenvironmental properties. However, CEST measurement depends not only on the CEST agent properties but also on the experimental conditions. Quantitative CEST (qCEST) analysis has been proposed to address the limitation of the commonly used simplistic CEST-weighted calculation. Recent research has shown that the concomitant direct RF saturation (spillover) effect can be corrected using an inverse CEST ratio calculation. We postulated that a simplified qCEST analysis is feasible with omega plot analysis of the inverse CEST asymmetry calculation. Specifically, simulations showed that the numerically derived labile proton ratio and exchange rate were in good agreement with input values. In addition, the qCEST analysis was confirmed experimentally in a phantom with concurrent variation in CEST agent concentration and pH. Also, we demonstrated that the derived labile proton ratio increased linearly with creatine concentration (P < 0.01) while the pH-dependent exchange rate followed a dominantly base-catalyzed exchange relationship (P < 0.01). In summary, our study verified that a simplified qCEST analysis can simultaneously determine labile proton ratio and exchange rate in a relatively complex in vitro CEST system. Copyright © 2015 John Wiley & Sons, Ltd.
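One common form of omega-plot analysis (the Dixon-style linearization; whether it matches this paper's exact inverse-asymmetry formulation is an assumption) fits 1/CESTR against 1/ω₁² and extracts the exchange rate and labile proton ratio from the slope and intercept:

```python
import numpy as np

def omega_plot_fit(omega1_rad_s, inverse_cest_ratio, t1_s):
    """Linear omega plot: 1/CESTR = intercept + slope / omega1**2.
    Under the standard two-pool approximation,
    exchange rate  ksw = sqrt(slope / intercept)  and
    labile proton ratio  fr = 1 / (ksw * intercept * T1)."""
    x = 1.0 / np.asarray(omega1_rad_s, float)**2
    y = np.asarray(inverse_cest_ratio, float)
    slope, intercept = np.polyfit(x, y, 1)
    ksw = np.sqrt(slope / intercept)
    fr = 1.0 / (ksw * intercept * t1_s)
    return ksw, fr
```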
NASA Astrophysics Data System (ADS)
Shaker, Ali Mohamed; Nassr, Lobna Abdel-Mohsen Ebaid; Adam, Mohamed Shaker Saied; Mohamed, Ibrahim Mohamed Abdelhalim
2015-05-01
Kinetic study of acid hydrolysis of some hydrophilic Fe(II) Schiff base amino acid complexes with antibacterial properties was performed using spectrophotometry. The Schiff base ligands were derived from sodium 2-hydroxybenzaldehyde-5-sulfonate and glycine, L-alanine, L-leucine, L-isoleucine, DL-methionine, DL-serine, or L-phenylalanine. The reaction was studied in aqueous media under conditions of pseudo-first-order kinetics. Moreover, the acid hydrolysis was studied at different temperatures and the activation parameters were calculated. The general rate equation was suggested as follows: rate = kobs[Complex], where kobs = k2[H+]. The evaluated rate constants and activation parameters are consistent with the hydrophilicity of the investigated complexes.
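The stated rate law, rate = kobs[Complex] with kobs = k2[H+], and the extraction of an activation energy from rate constants at different temperatures can be sketched as follows (the two-point Arrhenius form is an illustrative simplification of a full Eyring analysis):

```python
import math

R_GAS = 8.314  # J/(mol K)

def k_obs(k2, h_plus):
    """Pseudo-first-order observed rate constant: k_obs = k2 * [H+]."""
    return k2 * h_plus

def arrhenius_ea(ka, ta, kb, tb):
    """Activation energy (J/mol) from rate constants at two temperatures:
    ln(kb/ka) = -(Ea/R_GAS) * (1/tb - 1/ta)."""
    return -R_GAS * math.log(kb / ka) / (1.0 / tb - 1.0 / ta)
```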
New Approach for Investigating Reaction Dynamics and Rates with Ab Initio Calculations.
Fleming, Kelly L; Tiwary, Pratyush; Pfaendtner, Jim
2016-01-21
Herein, we demonstrate a convenient approach to systematically investigate chemical reaction dynamics using the metadynamics (MetaD) family of enhanced sampling methods. Using a symmetric SN2 reaction as a model system, we applied infrequent metadynamics, a theoretical framework based on acceleration factors, to quantitatively estimate the rate of reaction from biased and unbiased simulations. A systematic study of the algorithm and its application to chemical reactions was performed by sampling over 5000 independent reaction events. Additionally, we quantitatively reweighed exhaustive free-energy calculations to obtain the reaction potential-energy surface and showed that infrequent metadynamics works to effectively determine Arrhenius-like activation energies. Exact agreement with unbiased high-temperature kinetics is also shown. The feasibility of using the approach on actual ab initio molecular dynamics calculations is then presented by using Car-Parrinello MD+MetaD to sample the same reaction using only 10-20 calculations of the rare event. Owing to the ease of use and comparatively low-cost of computation, the approach has extensive potential applications for catalysis, combustion, pyrolysis, and enzymology.
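Infrequent metadynamics recovers unbiased rates by rescaling biased simulation time with an acceleration factor, the average of exp(V(s,t)/kT) over the bias experienced during the run. A minimal sketch of that bookkeeping (units and function names are assumptions):

```python
import math

def acceleration_factor(bias_values_kj_mol, temperature=300.0):
    """Infrequent-metadynamics acceleration factor:
    alpha = < exp(V(s,t) / kT) > averaged over the biased trajectory."""
    kT = 0.0083145 * temperature  # kJ/mol
    boosts = [math.exp(v / kT) for v in bias_values_kj_mol]
    return sum(boosts) / len(boosts)

def rescaled_time(biased_time, bias_values_kj_mol, temperature=300.0):
    """Map biased simulation time to an estimate of unbiased time."""
    return biased_time * acceleration_factor(bias_values_kj_mol, temperature)
```

Rescaled transition times from many independent runs are then typically checked against a Poisson distribution before an Arrhenius analysis.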
Systematic Analysis of Icotinib Treatment for Patients with Non-Small Cell Lung Cancer.
Shi, Bing; Zhang, Xiu-Bing; Xu, Jian; Huang, Xin-En
2015-01-01
This analysis was conducted to evaluate the efficacy and safety of icotinib-based regimens in treating patients with non-small cell lung cancer (NSCLC). Clinical studies evaluating the efficacy and safety of icotinib-based regimens with regard to response and safety for patients with NSCLC were identified using a predefined search strategy. Pooled response rates of treatment were calculated. For icotinib-based regimens, 7 clinical studies including 5,985 Chinese patients with NSCLC were considered eligible for inclusion. The pooled analysis suggested that, in all patients, the positive response rate was 30.1% (1,803/5,985) with icotinib-based regimens. Mild skin itching, rashes and diarrhea were the main side effects. No grade III or IV renal or liver toxicity was observed. No treatment-related death occurred in patients treated with icotinib-based regimens. This evidence-based analysis suggests that icotinib-based regimens are associated with a modest response rate and acceptable toxicity for treating Chinese patients with NSCLC.
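The pooled response rate quoted (30.1% = 1,803/5,985) is a simple events-over-totals pooling across studies. A sketch with a naive fixed-effect Wald interval attached (the interval method is an assumption, not part of the published analysis):

```python
import math

def pooled_rate(events, totals):
    """Pooled response rate across studies: sum(events) / sum(totals).
    Returns (rate, (lower, upper)) with a 95% Wald interval."""
    e, n = sum(events), sum(totals)
    p = e / n
    se = math.sqrt(p * (1 - p) / n)  # simple fixed-effect standard error
    return p, (p - 1.96 * se, p + 1.96 * se)
```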
High-resolution dating of deep-sea clays using Sr isotopes in fossil fish teeth
NASA Astrophysics Data System (ADS)
Ingram, B. Lynn
1995-09-01
Strontium isotopic compositions of ichthyoliths (microscopic fish remains) in deep-sea clays recovered from the North Pacific Ocean (ODP holes 885A, 886B, and 886C) are used to provide stratigraphic age control within these otherwise undatable sediments. Age control within the deep-sea clays is crucial for determining changes in sedimentation rates, and for calculating fluxes of chemical and mineral components to the sediments. The Sr isotopic ages are in excellent agreement with independent age datums from above (diatom ooze), below (basalt basement) and within (Cretaceous-Tertiary boundary) the clay deposit. The 87Sr/86Sr ratios of fish teeth from the top of the pelagic clay unit (0.708989) indicate a Late Miocene age (5.8 Ma), as do radiolarian and diatom biostratigraphic ages in the overlying diatom ooze. The 87Sr/86Sr ratio (0.707887) is consistent with a Cretaceous-Tertiary boundary age, as identified by anomalously high iridium, shocked quartz, and spherules in Hole 886C. The 87Sr/86Sr ratios of pretreated fish teeth from the base of the clay unit are similar to Late Cretaceous seawater (0.707779-0.707519), consistent with radiometric ages from the underlying basalt of 81 Ma. Calculation of sedimentation rates based on Sr isotopic ages from Hole 886C indicates an average sedimentation rate of 17.7 m/Myr in Unit II (diatom ooze), 0.55 m/Myr in Unit IIIa (pelagic clay), and 0.68 m/Myr in Unit IIIb (distal hydrothermal precipitates). The Sr isotopic ages indicate a period of greatly reduced sedimentation (or a possible hiatus) between about 35 and 65 Ma (Eocene-Paleocene), with a linear sedimentation rate of only 0.04 m/Myr. The calculated sedimentation rates are generally inversely proportional to cobalt accumulation rates and ichthyolith abundances. However, discrepancies between Sr isotope ages and cobalt accumulation ages of 10-15 Myr are evident, particularly in the middle of clay unit IIIa (Oligocene-Paleocene).
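The sedimentation rates quoted above are average linear rates between dated horizons, that is, sediment thickness divided by the age difference. A trivial sketch (names are illustrative):

```python
def sedimentation_rate(depth_top_m, depth_base_m, age_top_ma, age_base_ma):
    """Average linear sedimentation rate (m/Myr) between two dated horizons."""
    dz = depth_base_m - depth_top_m
    dt = age_base_ma - age_top_ma
    if dt <= 0:
        raise ValueError("the base horizon must be older than the top horizon")
    return dz / dt
```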
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballester, Facundo, E-mail: Facundo.Ballester@uv.es; Carlsson Tedgren, Åsa; Granero, Domingo
Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) 192Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR 192Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic 192Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR 192Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 ± 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within Type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR 192Ir source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.
Alecu, I M; Truhlar, Donald G
2011-12-29
Multistructural canonical variational-transition-state theory with multidimensional tunneling (MS-CVT/MT) is employed to calculate thermal rate constants for the abstraction of hydrogen atoms from both positions of methanol by the hydroperoxyl and methyl radicals over the temperature range 100-3000 K. The M08-HX hybrid meta-generalized gradient approximation density functional and M08-HX with specific reaction parameters, both with the maug-cc-pVTZ basis set, were validated in part 1 of this study (Alecu, I. M.; Truhlar, D. G. J. Phys. Chem. A 2011, 115, 2811) against highly accurate CCSDT(2)(Q)/CBS calculations for the energetics of these reactions, and they are used here to compute the properties of all stationary points and the energies, gradients, and Hessians of nonstationary points along each considered reaction path. The internal rotations in some of the transition states are found to be highly anharmonic and strongly coupled to each other, and they generate multiple structures (conformations) whose contributions are included in the partition function. It is shown that the previous estimates for these rate constants used to build kinetic models for the combustion of methanol, some of which were based on transition state theory calculations with one-dimensional tunneling corrections and harmonic-oscillator approximations or separable one-dimensional hindered rotor treatments of torsions, are appreciably different than the ones presently calculated using MS-CVT/MT. The rate constants obtained from the best MS-CVT/MT calculations carried out in this study, in which the important effects of corner cutting due to small and large reaction path curvature are captured via a microcanonical optimized multidimensional tunneling (μOMT) treatment, are recommended for future refinement of the kinetic model for methanol combustion. © 2011 American Chemical Society
Method and system for gas flow mitigation of molecular contamination of optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgado, Gildardo; Johnson, Terry; Arienti, Marco
A computer-implemented method for determining an optimized purge gas flow in a semi-conductor inspection, metrology, or lithography apparatus, comprising: receiving a permissible contaminant mole fraction, a contaminant outgassing flow rate associated with a contaminant, a contaminant mass diffusivity, an outgassing surface length, a pressure, a temperature, a channel height, and a molecular weight of a purge gas; calculating a flow factor based on the permissible contaminant mole fraction, the contaminant outgassing flow rate, the channel height, and the outgassing surface length; comparing the flow factor to a predefined maximum flow factor value; calculating a minimum purge gas velocity and a purge gas mass flow rate from the flow factor, the contaminant mass diffusivity, the pressure, the temperature, and the molecular weight of the purge gas; and introducing the purge gas into the semi-conductor inspection, metrology, or lithography apparatus with the minimum purge gas velocity and the purge gas flow rate.
Implementation of a vibrationally linked chemical reaction model for DSMC
NASA Technical Reports Server (NTRS)
Carlson, A. B.; Bird, Graeme A.
1994-01-01
A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.
Progressive addition lenses--measurements and ratings.
Sheedy, Jim; Hardy, Raymond F; Hayes, John R
2006-01-01
This study is a follow-up to a previous study in which the optics of several progressive addition lens (PAL) designs were measured and analyzed. The objective was to provide information about various PAL designs to enable eye care practitioners to select designs based on the particular viewing requirements of the patient. The optical properties of 12 lenses of the same power for each of 23 different PAL designs were measured with a Rotlex Class Plus lens analyzer. Lenses were ordered through optical laboratories and specified to be plano with a +2.00 diopter add. Measurements were normalized to plano at the manufacturer-assigned location for the distance power to eliminate laboratory tolerance errors. The magnitude of unwanted astigmatism and the widths and areas of the distance, intermediate, and near viewing zones were calculated from the measured data according to the same criteria used in the previous study. The optical characteristics of the different PAL designs were significantly different from one another. The differences were significant in terms of the sizes and widths of the viewing zones, the amount of unwanted astigmatism, and the minimum fitting height. Ratings of the distance, intermediate, and near viewing areas were calculated for each PAL design based on the widths and sizes of those zones. Ratings for unwanted astigmatism and recommended minimum fitting heights were also determined. Ratings based on combinations of viewing zone ratings are also reported. The ratings are intended to be used to select a PAL design that matches the particular visual needs of the patient and to evaluate the success and performance of currently worn PALs. Reasoning and task analyses suggest that these differences can be used to select a PAL design to meet the individual visual needs of the patient; clinical trial studies are required to test this hypothesis.
Carpio, B; Brown, B
1993-01-01
The undergraduate nursing degree program (B.Sc.N.) at McMaster University School of Nursing uses small groups, and is learner-centered and problem-based. A study was conducted during the 1991 admissions cycle to determine the initial reliability and validity of the semi-structured personal interview which constitutes the final component of candidate selection for this program. During the interview, three-member teams assess applicant suitability to the program based on six dimensions: applicant motivation, awareness of the program, problem-solving abilities, ability to relate to others, self-appraisal skills, and career goals. Each interviewer assigns the applicant a global rating using a seven-point scale. For the purposes of this study four interviewer teams were randomly selected from the pool of 31 teams to interview four simulated (preprogrammed) applicants. Using two-factor repeated-measures ANOVA to analyze interview ratings, inter-rater and inter-team intraclass correlation coefficients (ICC) were calculated. Inter-team reliability ranged from .64 to .97 for the individual dimensions, and .66 to .89 on global ratings. Inter-rater ICC for the six dimensions ranged from .81 to .99, and .96 to .99 for the global ratings. The item-to-total correlation coefficients between individual dimensions and global ratings ranged from .8 to 1.0. Pearson correlations between items ranged from .77 to 1.0. The ICC were then calculated for the interview scores of 108 actual applicants to the program. Inter-rater reliability based on global ratings was .79 for the single (1 rater) observation, and .91 for the multiple (3 rater) observation. These findings support the continued use of the interview as a reliable instrument with face validity. Studies of predictive validity will be undertaken.
Crude and intrinsic birth rates for Asian countries.
Rele, J R
1978-01-01
An attempt to estimate birth rates for Asian countries is presented. The main source of information in developing countries has been the census age-sex distribution, although inaccuracies in the basic data have made it difficult to reach a high degree of accuracy, and different methods bring widely varying results. The methodology presented here is based on the conventional child-woman ratio from the census age-sex distribution, together with a rough estimate of the expectation of life at birth. From the established relationship between the child-woman ratio and the intrinsic birth rate, of the form y = a + bx + cx², at each level of life expectation, the intrinsic birth rate is first computed using previously derived coefficients. The crude birth rate is then obtained using an adjustment based on the census age-sex distribution. An advantage of this methodology is that the intrinsic birth rate, normally an involved computation, can be obtained relatively easily as a byproduct of the crude birth rate calculations. Results are given for each of 33 Asian countries, in some cases over several time periods.
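Rele's method evaluates a fitted quadratic linking the child-woman ratio to the intrinsic birth rate at a given life-expectancy level. A sketch of that evaluation step (the coefficients a, b, c must come from the published tables; none are reproduced here, and the test values are arbitrary):

```python
def intrinsic_birth_rate(child_woman_ratio, a, b, c):
    """Evaluate the fitted quadratic y = a + b*x + c*x**2 relating
    the child-woman ratio x to the intrinsic birth rate y.
    Coefficients apply at one specific level of life expectancy."""
    x = child_woman_ratio
    return a + b * x + c * x * x
```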
High-temperature oxidation kinetics of sponge-based E110 cladding alloy
Yan, Yong; Garrison, Benton E.; Howell, Mike; ...
2017-11-03
Two-sided oxidation experiments were recently conducted at 900-1200 °C in flowing steam with samples of sponge-based Zr-1Nb alloy E110. Although the old electrolytic E110 tubing exhibited a high degree of susceptibility to nodular corrosion and experienced breakaway oxidation rates in a relatively short time, the new sponge-based E110 demonstrated steam oxidation behavior comparable to Zircaloy-4. Sample weight gain and oxide layer thickness measurements were performed on oxidized E110 specimens and compared to oxygen pickup and oxide layer thickness calculations using the Cathcart-Pawel correlation. Our study shows that the sponge-based E110 follows the parabolic law at temperatures above 1015 °C. At or below 1015 °C, the oxidation rate was very low when compared to Zircaloy-4 and can be represented by a cubic expression. No breakaway oxidation was observed at 1000 °C for oxidation times up to 10,000 s. Arrhenius expressions are given to describe the parabolic rate constants at temperatures above 1015 °C and cubic rate constants are provided for temperatures below 1015 °C. The weight gains calculated by our equations are in excellent agreement with the measured sample weight gains at all test temperatures. In addition to the as-fabricated E110 cladding sample, prehydrided E110 cladding with hydrogen concentrations in the 100-150 wppm range was also investigated. The effect of hydrogen content on sponge-based E110 oxidation kinetics was minimal. No significant difference was found between as-fabricated and hydrided samples with regard to oxygen pickup and oxide layer thickness for hydrogen contents below 150 wppm.
High-temperature oxidation kinetics of sponge-based E110 cladding alloy
NASA Astrophysics Data System (ADS)
Yan, Yong; Garrison, Benton E.; Howell, Mike; Bell, Gary L.
2018-02-01
Two-sided oxidation experiments were recently conducted at 900°C-1200 °C in flowing steam with samples of sponge-based Zr-1Nb alloy E110. Although the old electrolytic E110 tubing exhibited a high degree of susceptibility to nodular corrosion and experienced breakaway oxidation rates in a relatively short time, the new sponge-based E110 demonstrated steam oxidation behavior comparable to Zircaloy-4. Sample weight gain and oxide layer thickness measurements were performed on oxidized E110 specimens and compared to oxygen pickup and oxide layer thickness calculations using the Cathcart-Pawel correlation. Our study shows that the sponge-based E110 follows the parabolic law at temperatures above 1015 °C. At or below 1015 °C, the oxidation rate was very low when compared to Zircaloy-4 and can be represented by a cubic expression. No breakaway oxidation was observed at 1000 °C for oxidation times up to 10,000 s. Arrhenius expressions are given to describe the parabolic rate constants at temperatures above 1015 °C and cubic rate constants are provided for temperatures below 1015 °C. The weight gains calculated by our equations are in excellent agreement with the measured sample weight gains at all test temperatures. In addition to the as-fabricated E110 cladding sample, prehydrided E110 cladding with hydrogen concentrations in the 100-150 wppm range was also investigated. The effect of hydrogen content on sponge-based E110 oxidation kinetics was minimal. No significant difference was found between as-fabricated and hydrided samples with regard to oxygen pickup and oxide layer thickness for hydrogen contents below 150 wppm.
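The kinetics described above (parabolic above 1015 °C with an Arrhenius rate constant, cubic at or below) can be sketched as follows; the prefactors and activation energies are hypothetical placeholders, not the fitted constants from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def weight_gain(t, T, A_par=1.0e5, Q_par=1.8e5, A_cub=1.0e7, Q_cub=2.0e5):
    """Weight gain (arbitrary units) after time t (s) at temperature T (K).

    Parabolic law w^2 = k_p * t above 1015 degC; cubic law w^3 = k_c * t
    at or below 1015 degC. All rate-constant parameters are illustrative only.
    """
    if T > 1015 + 273.15:
        k_p = A_par * math.exp(-Q_par / (R * T))
        return math.sqrt(k_p * t)
    k_c = A_cub * math.exp(-Q_cub / (R * T))
    return (k_c * t) ** (1.0 / 3.0)

# In the parabolic regime, doubling the time scales weight gain by sqrt(2)
w1 = weight_gain(1000.0, 1473.15)  # 1200 degC
w2 = weight_gain(2000.0, 1473.15)
print(round(w2 / w1, 3))  # 1.414
```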
NASA Astrophysics Data System (ADS)
Aguado, Alfredo; Roncero, Octavio; Zanchet, Alexandre; Agúndez, Marcelino; Cernicharo, José
2017-03-01
The impact of the photodissociation of HCN and HNC isomers is analyzed in different astrophysical environments. For this purpose, the individual photodissociation cross sections of HCN and HNC isomers have been calculated in the 7-13.6 eV photon energy range for a temperature of 10 K. These calculations are based on the ab initio calculation of three-dimensional adiabatic potential energy surfaces of the 21 lower electronic states. The cross sections are then obtained using a quantum wave packet calculation of the rotational transitions needed to simulate a rotational temperature of 10 K. The cross section calculated for HCN shows significant differences with respect to the experimental one, and this is attributed to the need to consider non-adiabatic transitions. Ratios between the photodissociation rates of HCN and HNC under different ultraviolet radiation fields have been computed by renormalizing the rates to the experimental value. It is found that HNC is photodissociated faster than HCN by a factor of 2.2 for the local interstellar radiation field and 9.2 for the solar radiation field, at 1 au. We conclude that to properly describe the HNC/HCN abundance ratio in astronomical environments illuminated by an intense ultraviolet radiation field, it is necessary to use different photodissociation rates for each of the two isomers, which are obtained by integrating the product of the photodissociation cross sections and ultraviolet radiation field over the relevant wavelength range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
An Adaptive Nonlinear Basal-Bolus Calculator for Patients With Type 1 Diabetes
Boiroux, Dimitri; Aradóttir, Tinna Björk; Nørgaard, Kirsten; Poulsen, Niels Kjølstad; Madsen, Henrik; Jørgensen, John Bagterp
2016-01-01
Background: Bolus calculators help patients with type 1 diabetes to mitigate the effect of meals on their blood glucose by administering a large amount of insulin at mealtime. Intraindividual changes in patients' physiology and nonlinearity in insulin-glucose dynamics pose a challenge to the accuracy of such calculators. Method: We propose a method based on a continuous-discrete unscented Kalman filter to continuously track the postprandial glucose dynamics and the insulin sensitivity. We augment the Medtronic Virtual Patient (MVP) model to simulate noise-corrupted data from a continuous glucose monitor (CGM). The basal rate is determined by calculating the steady state of the model and is adjusted once a day before breakfast. The bolus size is determined by optimizing the postprandial glucose values based on an estimate of the insulin sensitivity and states, as well as the announced meal size. Following meal announcements, the meal compartment and the meal time constant are estimated; otherwise, insulin sensitivity is estimated. Results: We compare the performance of a conventional linear bolus calculator with the proposed bolus calculator. The proposed basal-bolus calculator significantly improves the time spent in glucose target (P < .01) compared to the conventional bolus calculator. Conclusion: An adaptive nonlinear basal-bolus calculator can efficiently compensate for physiological changes. Further clinical studies will be needed to validate the results. PMID:27613658
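For contrast with the adaptive approach, a conventional linear bolus calculator combines a fixed carbohydrate term and a correction term; the sketch below uses the widely known textbook form with illustrative parameter values, not the specific calculator from the study:

```python
def linear_bolus(carbs_g, bg_mmol, target_mmol, icr_g_per_u, isf_mmol_per_u, iob_u=0.0):
    """Conventional linear bolus: meal dose + correction dose - insulin on board.

    carbs_g: announced meal size (g); icr_g_per_u: insulin-to-carb ratio (g/U);
    isf_mmol_per_u: insulin sensitivity factor (mmol/L per U). Illustrative only.
    """
    meal_dose = carbs_g / icr_g_per_u
    correction = (bg_mmol - target_mmol) / isf_mmol_per_u
    return max(0.0, meal_dose + correction - iob_u)

# 60 g meal, BG 9 mmol/L vs target 6, ICR 10 g/U, ISF 3 mmol/L per U
print(linear_bolus(60, 9.0, 6.0, 10.0, 3.0))  # 6.0 + 1.0 = 7.0 U
```

The adaptive method in the paper replaces the fixed ICR/ISF parameters with state estimates tracked by the unscented Kalman filter.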
Nighttime ionization by energetic particles at Wallops Island in the altitude region 120 to 200 km
NASA Technical Reports Server (NTRS)
Voss, H. D.; Smith, L. G.
1979-01-01
Five Nike Apache rockets, each including an energetic particle spectrometer and an electron density-electron temperature experiment, have been launched from Wallops Island (L = 2.6) near midnight under varying geomagnetic conditions. On the most recent of these (5 January 1978) an additional spectrometer with a broom magnet, and a 391.4 nm photometer were flown. The data from this flight indicate that the energetic particle flux consists predominantly of protons, neutral hydrogen and possibly other energetic nuclei. The energy spectrum becomes much softer and the flux more intense with increasing Kp for 10-100 keV. The pitch angle distribution at 180 km is asymmetrical with a peak at 90 deg indicating that the majority of particles are near their mirroring altitude. Ionization rates are calculated based on the measured energy spectrum and mirror height distribution. The resulting ionization rate profile is found to be nearly constant with altitude in the region 120 to 200 km. The measured energetic particle flux and calculated ionization rate from the five flights are found to vary with magnetic activity (based on the Kp and Dst indexes) in the same way as the independently derived ionization rates deduced from the electron density profile.
[Economic aspects of evidence-based metaphylaxis].
Strohmaier, W L
2006-11-01
The calculation model which we developed for the cost of stone therapy and metaphylaxis in Germany some years ago with a social health insurance company is based on estimates of stone incidence, types and recurrence rates, actual costs for stone removal, and metaphylaxis (based on data from a district of the social health care system). There are 200,000 stone recurrences per year in Germany. Costs for treatment of these stones amount to $687,000,000. Stone metaphylaxis reduces the recurrence rate by some 40%. The annual cost for stone removal could be lowered by $275,300,000. Metabolic evaluation/metaphylaxis amount to $70,100,000 per year, resulting in a net saving of $205,200,000. In 1997, there were 96 days off work per stone patient resulting in 5,800,000 days off work in Germany per year. Metaphylaxis is not only medically effective in stone formers but also can lower health care cost significantly. Although health care conditions may vary from country to country, in principle this calculation model is applicable also to other countries.
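The reported net saving follows directly from the quoted figures; a quick check of the arithmetic:

```python
treatment_cost = 687_000_000      # annual cost of recurrent stone treatment ($)
gross_saving = 275_300_000        # reported reduction from ~40% fewer recurrences
metaphylaxis_cost = 70_100_000    # annual cost of metabolic evaluation/metaphylaxis

net_saving = gross_saving - metaphylaxis_cost
print(net_saving)  # 205200000, matching the reported $205,200,000
```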
Measurements of UGR of LED light by a DSLR colorimeter
NASA Astrophysics Data System (ADS)
Hsu, Shau-Wei; Chen, Cheng-Hsien; Jiaan, Yuh-Der
2012-10-01
We have developed an image-based measurement method for the UGR (unified glare rating) of interior lighting environments. A calibrated DSLR (digital single-lens reflex camera) with an ultra wide-angle lens was used to measure the luminance distribution, from which the corresponding parameters can be automatically calculated. An LED lighting fixture was placed in a room and measured at various positions and directions to study the properties of UGR. The test results are consistent with visual experience and UGR principles. To further examine the results, a spectroradiometer and an illuminance meter were respectively used to measure the luminance and illuminance at the same position and orientation as the DSLR. The calculation of UGR by this image-based method may solve the problem of the non-uniform luminance distribution of LED lighting; segmentation of the luminance map for the calculations was also studied.
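The UGR value itself is conventionally computed from the CIE formula UGR = 8 log10[(0.25/Lb) Σ L²ω/p²] over the glare sources extracted from the luminance image; a minimal sketch, with the sample source values being hypothetical:

```python
import math

def ugr(background_luminance, sources):
    """CIE Unified Glare Rating.

    background_luminance: Lb in cd/m^2.
    sources: iterable of (L, omega, p) with source luminance L (cd/m^2),
    solid angle omega (sr) and Guth position index p.
    """
    s = sum(L * L * omega / (p * p) for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * s)

# One hypothetical luminaire: L = 2000 cd/m^2, omega = 0.01 sr, p = 2
print(round(ugr(30.0, [(2000.0, 0.01, 2.0)]), 1))  # 15.4
```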
Feasibility study of palm-based fuels for hybrid rocket motor applications
NASA Astrophysics Data System (ADS)
Tarmizi Ahmad, M.; Abidin, Razali; Taha, A. Latif; Anudip, Amzaryi
2018-02-01
This paper describes the combined analysis of pure palm-based wax that can be used as solid fuel in a hybrid rocket engine. The calorific value of pure palm wax was measured using a bomb calorimeter. An experimental rocket engine and a static test stand facility were established. After initial measurement and calibration, repeated test procedures were performed. The installed instrumentation allows measurement of fuel regression rates, oxidizer mass flow rates, and motor thrust. Similar tests were also carried out with stearic acid (a palm oil by-product) dissolved with nitrocellulose and beeswax solution. Calculations and experiments show that usable regression rates and thrust can be achieved even with pure palm-based wax. Additionally, palm-based wax was mixed with beeswax, which has a higher nominal melting temperature, to raise the melting point without affecting regression rate values. Calorimetric measurements and ballistic experiments were performed on this new fuel formulation. This new formulation promises applications over a wide range of temperatures.
Kleinsorge, F; Smetanay, K; Rom, J; Hörmansdörfer, C; Hörmannsdörfer, C; Scharf, A; Schmidt, P
2010-12-01
In 2008, 2,351 first-trimester screenings were evaluated using a newly developed internet database ( http://www.firsttrimester.net ) to assess the risk for the presence of Down's syndrome. All data were evaluated both by the conventional first-trimester screening according to Nicolaides (FTS), based on the previous JOY software, and by the advanced first-trimester screening (AFS). After receiving karyotype feedback, the rates of correct positives, correct negatives, false positives and false negatives, as well as the sensitivity and specificity, were calculated and compared. Overall, 255 cases were investigated which were analysed by both methods. These included 2 cases of Down's syndrome and one case of trisomy 18. The FTS and the AFS both had a sensitivity of 100%. The specificity was 88.5% for the FTS and 93.0% for the AFS. As already shown in former studies, the higher specificity of the AFS is a result of a reduction of the false positive rate (28 to 17 cases). As a consequence, with a detection rate of 100%, the AFS decreases the rate of further invasive diagnostics in pregnant women by yielding 39% fewer positively tested women. © Georg Thieme Verlag KG Stuttgart · New York.
Rowan Gorilla I rigged up, heads for eastern Canada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
Designed to operate in very hostile offshore environments, the first of the Rowan Gorilla class of self-elevating drilling rigs has been towed to its drilling assignment offshore Nova Scotia. About 40% larger than other jackups, these rigs can operate in 300 ft of water, drilling holes as deep as 30,000 ft. They also feature unique high-pressure and solids control systems that are expected to improve drilling procedures and efficiencies. A quantitative formation pressure evaluation program for the Hewlett-Packard HP-41 handheld calculator computes formation pressures by three independent methods - the corrected d exponent, Bourgoyne and Young, and normalized penetration rate techniques for abnormal pressure detection and computation. Based on empirically derived drilling rate equations, each of the methods can be calculated separately, without being dependent on or influenced by the results or stored data from the other two subprograms. The quantitative interpretation procedure involves establishing a normal drilling rate trend and calculating the pore pressure from the magnitude of the drilling rate trend or plotting parameter increases above the trend line. Mobil's quick, accurate program could aid drilling operators in selecting the casing point, minimizing differential sticking, maintaining the proper mud weights to avoid kicks and lost circulation, and maximizing penetration rates.
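The corrected d-exponent method mentioned above is commonly written as d = log10(R/60N) / log10(12W/10^6 D), corrected by the ratio of normal to actual mud weight; a sketch with hypothetical drilling parameters (the source does not give the HP-41 program's exact formulation):

```python
import math

def d_exponent(rop_ft_hr, rpm, wob_lb, bit_dia_in):
    """d-exponent from rate of penetration, rotary speed, weight on bit, bit diameter."""
    return math.log10(rop_ft_hr / (60.0 * rpm)) / math.log10(12.0 * wob_lb / (1e6 * bit_dia_in))

def corrected_d(d, normal_mud_ppg, actual_mud_ppg):
    """Correct the d-exponent for mud weight (a proxy for differential pressure)."""
    return d * normal_mud_ppg / actual_mud_ppg

# Hypothetical bit run: 50 ft/hr, 100 rpm, 40,000 lb WOB, 8.5 in bit
d = d_exponent(50.0, 100.0, 40_000.0, 8.5)
dc = corrected_d(d, 9.0, 12.0)
print(round(d, 2), round(dc, 2))  # 1.67 1.25
```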
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ming; Albrecht, Bruce A.; Ghate, Virendra P.
This study first illustrates the utility of using the Doppler spectrum width from millimetre-wavelength radar to calculate the energy dissipation rate and then to use the energy dissipation rate to study turbulence structure in a continental stratocumulus cloud. It is shown that the turbulence kinetic energy dissipation rate calculated from the radar-measured Doppler spectrum width agrees well with that calculated from the Doppler velocity power spectrum. During the 16-h stratocumulus cloud event, the small-scale turbulence contributes 40% of the total velocity variance at cloud base, 50% at normalized cloud depth=0.8 and 70% at cloud top, which suggests that small-scale turbulence plays a critical role near the cloud top where the entrainment and cloud-top radiative cooling act. The 16-h mean vertical integral length scale decreases from about 160 m at cloud base to 60 m at cloud top, and this signifies that the larger scale turbulence dominates around cloud base whereas the small-scale turbulence dominates around cloud top. The energy dissipation rate, total variance and squared spectrum width exhibit diurnal variations, but unlike marine stratocumulus they are high during the day and lowest around sunset at all levels; energy dissipation rates increase at night with the intensification of the cloud-top cooling. In the normalized coordinate system, the averaged coherent structure of updrafts is characterized by low energy dissipation rates in the updraft core and higher energy dissipation rates surrounding the updraft core at the top and along the edges. In contrast, the energy dissipation rate is higher inside the downdraft core, indicating that the downdraft core is more turbulent. The turbulence around the updraft is weaker at night and stronger during the day; the opposite is true around the downdraft. This behaviour indicates that the turbulence in the downdraft has a diurnal cycle similar to that observed in marine stratocumulus, whereas the turbulence diurnal cycle in the updraft is reversed. For both updraft and downdraft, the maximum energy dissipation rate occurs at a cloud depth=0.8 where the maximum reflectivity and air acceleration or deceleration are observed. Resolved turbulence dominates near cloud base whereas unresolved turbulence dominates near cloud top. Similar to the unresolved turbulence, the resolved turbulence described by the radial velocity variance is higher in the downdraft than in the updraft. The impact of the surface heating on the resolved turbulence in the updraft decreases with height and diminishes around the cloud top. In both updrafts and downdrafts, the resolved turbulence increases with height and reaches a maximum at cloud depth=0.4 and then decreases to the cloud top; the resolved turbulence near cloud top, just as the unresolved turbulence, is mostly due to the cloud-top radiative cooling.
NASA Astrophysics Data System (ADS)
Matsuda, Norihiro; Izumi, Yuichi; Yamanaka, Yoshiyuki; Gandou, Toshiyuki; Yamada, Masaaki; Oishi, Koji
2017-09-01
Measurements of reaction rates by secondary neutrons produced from beam losses by 17-MeV protons were conducted at a compact cyclotron facility with the foil activation method. The experimentally obtained distribution of the reaction rates of 197Au(n,γ)198Au on the concrete walls suggests that a target and an electrostatic deflector, machine components for beam extraction of the compact cyclotron, are the principal beam loss points. The measurements are compared with calculations by the Monte Carlo code PHITS. The calculated results based on the beam losses agree with the measured ones within 21%. In this compact cyclotron facility, exponential attenuation with distance from the electrostatic deflector was observed in the distributions of the measured reaction rates, which is weaker than attenuation by the inverse square of the distance.
Cho, Sung Youn; Chae, Soo-Won; Choi, Kui Won; Seok, Hyun Kwang; Han, Hyung Seop; Yang, Seok Jo; Kim, Young Yul; Kim, Jong Tac; Jung, Jae Young; Assad, Michel
2012-08-01
In this study, a newly developed Mg-Ca-Zn alloy for low degradation rate and surface erosion properties was evaluated. The compressive, tensile, and fatigue strength were measured before implantation. The degradation behavior was evaluated by analyzing the microstructure and local hardness of the explanted specimen. Mean and maximum degradation rates were measured using micro CT equipment from 4-, 8-, and 16- week explants, and the alloy was shown to display surface erosion properties. Based on these characteristics, the average and minimum load bearing capacities in tension, compression, and bending modes were calculated. According to the degradation rate and references of recommended dietary intakes (RDI), the Mg-Ca-Zn alloy appears to be safe for human use. Copyright © 2012 Wiley Periodicals, Inc.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false Is there a predetermined cap or limit on indirect cost rates or a fixed formula for calculating indirect cost rates? 1000.141 Section 1000.141 Indians OFFICE... cap or limit on indirect cost rates or a fixed formula for calculating indirect cost rates? No...
Code of Federal Regulations, 2010 CFR
2010-01-01
... calculated under this appendix by the savings association for one or more exposures is not commensurate with... one or more exposures, provided that: (1) The savings association can demonstrate on an ongoing basis... this context, backtesting is one form of out-of-sample testing. Bank holding company is defined in...
Low but Increasing Prevalence of Autism Spectrum Disorders in a French Area from Register-Based Data
ERIC Educational Resources Information Center
van Bakel, Marit Maria; Delobel-Ayoub, Malika; Cans, Christine; Assouline, Brigitte; Jouk, Pierre-Simon; Raynaud, Jean-Philippe; Arnaud, Catherine
2015-01-01
Register-based prevalence rates of childhood autism (CA), Asperger's syndrome (AS) and other autism spectrum disorders (ASD) were calculated among children aged 7 years old of the 1997-2003 birth cohorts, living in four counties in France. The proportion of children presenting comorbidities was reported. 1123 children with ASD were recorded (M/F…
A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei
2017-05-18
The relationships between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithm optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (K* approach). The results show that the predictions of MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithm shows the best overall agreement with the experimental data out of the three MLAs, owing to its global optimization and extrapolation ability.
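The linear baseline that the machine learning models improve upon is the Paris law da/dN = C ΔK^m, a straight line in log-log space; a minimal least-squares sketch recovering the exponent from synthetic data (the data values are illustrative, not from the paper):

```python
import math

def fit_paris(delta_k, dadn):
    """Least-squares fit of log10(da/dN) = log10(C) + m*log10(dK); returns (C, m)."""
    xs = [math.log10(k) for k in delta_k]
    ys = [math.log10(r) for r in dadn]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    c = 10 ** (my - m * mx)
    return c, m

# Synthetic data generated from C = 1e-11, m = 3 lies exactly on the Paris line
dk = [10.0, 15.0, 20.0, 30.0]
da = [1e-11 * k ** 3 for k in dk]
c, m = fit_paris(dk, da)
print(round(m, 3))  # 3.0
```

Real da/dN data deviates from this straight line (and depends on stress ratio), which is the nonlinearity the MLAs are meant to capture.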
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in the seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations for the underlying power curves to detect a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey from an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
75 FR 35672 - Changes in Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
...This interim rule lists communities where modification of the Base (1% annual-chance) Flood Elevations (BFEs) is appropriate because of new scientific or technical data. New flood insurance premium rates will be calculated from the modified BFEs for new buildings and their contents.
42 CFR 422.300 - Basis and scope.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for making payments to Medicare Advantage (MA) organizations offering local and regional MA plans, including calculation of MA capitation rates and benchmarks, conditions under which payment is based on plan....458 in subpart J for rules on risk sharing payments to MA regional organizations. ...
NASA Astrophysics Data System (ADS)
Clamens, Olivier; Lecerf, Johann; Hudelot, Jean-Pascal; Duc, Bertrand; Cadiou, Thierry; Blaise, Patrick; Biard, Bruno
2018-01-01
CABRI is an experimental pulse reactor, funded by the French Nuclear Safety and Radioprotection Institute (IRSN) and operated by CEA at the Cadarache research center. It is designed to study fuel behavior under RIA conditions. In order to produce the power transients, reactivity is injected by depressurization of a neutron absorber (3He) situated in transient rods inside the reactor core. The shapes of power transients depend on the total amount of reactivity injected and on the injection speed. The injected reactivity can be calculated by conversion of the 3He gas density into units of reactivity. It is therefore of utmost importance to properly master the gas density evolution in the transient rods during a power transient. The 3He depressurization was studied by CFD calculations and complemented with measurements using pressure transducers. The CFD calculations show that the density evolution is slower than the pressure drop. Surrogate models were built based on CFD calculations and validated against preliminary tests in the CABRI transient system. Studies also show that it is harder to predict the depressurization during power transients because neutron/3He capture reactions induce gas heating. This phenomenon can be studied by a multiphysics approach based on reaction rate calculations with a Monte Carlo code, studying the resulting heating effect with the validated CFD simulation.
Introducing GFWED: The Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations tended to differ most at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibration of FWI-based fire prediction models.
CINE: Comet INfrared Excitation
NASA Astrophysics Data System (ADS)
de Val-Borro, Miguel; Cordiner, Martin A.; Milam, Stefanie N.; Charnley, Steven B.
2017-08-01
CINE calculates infrared pumping efficiencies that can be applied to the most common molecules found in cometary comae such as water, hydrogen cyanide or methanol. One of the main mechanisms for molecular excitation in comets is the fluorescence by the solar radiation followed by radiative decay to the ground vibrational state. This command-line tool calculates the effective pumping rates for rotational levels in the ground vibrational state scaled by the heliocentric distance of the comet. Fluorescence coefficients are useful for modeling rotational emission lines observed in cometary spectra at sub-millimeter wavelengths. Combined with computational methods to solve the radiative transfer equations based, e.g., on the Monte Carlo algorithm, this model can retrieve production rates and rotational temperatures from the observed emission spectrum.
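The heliocentric scaling of the effective pumping rates is an inverse-square law in the Sun-comet distance, since the solar flux driving the fluorescence falls off as 1/r²; a sketch, where the 1-au rate is a hypothetical placeholder rather than a CINE output:

```python
def pumping_rate(g_1au, r_h_au):
    """Scale an effective pumping rate known at 1 au by heliocentric distance (au)."""
    return g_1au / r_h_au ** 2

# Hypothetical 1-au pumping rate of 1e-4 s^-1, evaluated at 2 au
print(pumping_rate(1e-4, 2.0))  # 2.5e-05
```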
Feasibility of satellite quantum key distribution
NASA Astrophysics Data System (ADS)
Bonato, C.; Tomaello, A.; Da Deppo, V.; Naletto, G.; Villoresi, P.
2009-04-01
In this paper, we present a novel analysis of the feasibility of quantum key distribution between a LEO satellite and a ground station. First of all, we study signal propagation through a turbulent atmosphere for uplinks and downlinks, discussing the contribution of beam spreading and beam wandering. Then we introduce a model for the background noise of the channel during night-time and day-time, calculating the signal-to-noise ratio for different configurations. We also discuss the expected error-rate due to imperfect polarization compensation in the channel. Finally, we calculate the expected key generation rate of a secure key for different configurations (uplink, downlink) and for different protocols (BB84 with and without decoy states, entanglement-based Ekert91 protocol).
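For the key generation rate estimate, the asymptotic secure-key fraction of BB84 with one-way post-processing is commonly bounded by r = 1 - 2·h2(e), where h2 is the binary entropy and e the quantum bit error rate; a minimal sketch of that standard bound (the paper's full analysis also accounts for channel loss, background noise and decoy states):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_key_fraction(qber):
    """Asymptotic BB84 secret-key fraction, one-way post-processing bound."""
    return 1.0 - 2.0 * h2(qber)

print(bb84_key_fraction(0.05) > 0)  # True: positive key rate at 5% QBER
print(bb84_key_fraction(0.12) > 0)  # False: above the ~11% threshold
```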
A Naturally-Calibrated Flow Law for Quartz
NASA Astrophysics Data System (ADS)
Lusk, A. D.; Platt, J. P.
2017-12-01
Flow laws for power-law behavior of quartz deforming by crystal-plastic processes, with grain size sensitive creep included, take the general form: ė = A σ^n f(H2O) exp(-Q/RT) d^m, where A is a prefactor; σ, differential stress; n, stress exponent; f(H2O), water fugacity; Q, activation energy; R, gas constant; T, temperature (K); and d, grain size raised to the sensitivity exponent m. Assuming the dynamically recrystallized grain size for quartz follows the piezometric relationship, substitute d^m = (K σ^-p)^m, where K is the piezometric constant; σ, differential stress; and p, the piezometric exponent. Rearranging the above flow law: ė = A K^m σ^(n-pm) f(H2O) exp(-Q/RT). We use deformation temperatures, paleo-stresses, and strain rates calculated from rocks deformed in the Caledonian Orogeny, NW Scotland, along with existing experimental data, to compare naturally-calibrated values of the stress exponent (n-pm) and activation energy (Q) to those determined experimentally. Microstructures preserved in the naturally-strained rocks closely resemble those produced by experimental work, indicating that quartz was deformed by the same mechanism(s). These observations validate the use of predetermined values for A as well as the addition of experimental data to calculate Q. Values for f(H2O) are based on calculated pressure and temperature conditions. Using the abovementioned constraints, we compare results, discuss challenges, and explore implications of naturally- vs. experimentally-derived flow laws for dislocation creep in quartz. Rocks used for this study include quartzite and quartz-rich psammite of the Cambrian-Ordovician shelf sequence and the tectonically overlying Moine Supergroup. In both cases, quartz is likely the primary phase that controlled rheological behavior.
We use the empirically derived piezometer for the dynamically recrystallized grain size of quartz to calculate the magnitude of differential stress, along with the Ti-in-quartz thermobarometer and the c-axis opening angle thermometer to determine temperatures of deformation. Tensor strain rates are calculated from plate convergence rate, based on total displacement and duration of thrusting within the Moine thrust zone, and shear zone thickness calculated from four detailed structural and microstructural transects taken parallel to the direction of displacement.
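The substituted flow law can be evaluated numerically as sketched below. All default parameter values (A, n, Q, f(H2O), K, p, m) are illustrative placeholders only, not the calibration derived in this study.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def strain_rate(sigma_mpa, T_kelvin, A=6.3e-12, n=4.0, Q=135e3,
                f_h2o=1.0, K=100.0, p=1.26, m=-1.0):
    """Strain rate from the grain-size-sensitive flow law
        e_dot = A * sigma^n * f(H2O) * exp(-Q/RT) * d^m,
    with the recrystallized grain-size piezometer d = K * sigma^-p
    substituted, giving
        e_dot = A * K^m * sigma^(n - p*m) * f(H2O) * exp(-Q/RT).
    Parameter defaults are placeholders, not a published calibration."""
    return (A * (K ** m) * (sigma_mpa ** (n - p * m))
            * f_h2o * math.exp(-Q / (R * T_kelvin)))
```

As expected from the functional form, the computed rate increases monotonically with both differential stress and temperature.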
Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.
Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos
2014-01-01
Measurement of air exchange rate provides critical information in energy and indoor-air quality studies. Continuous measurement of ventilation rates is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, in which the air exchange rate is calculated from routine indoor and outdoor measurements of a common pollutant such as SO2, and the uncertainties induced in the calculations are determined analytically. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h(-1). The proposed method was further evaluated by applying a mass balance numerical model for the calculation of indoor aerosol number concentrations, using the previously calculated ventilation rate, the outdoor measured number concentrations, and the particle loss rates as input values. The model results for the indoor concentrations were found to compare well with the experimentally measured values.
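The mass balance approach can be sketched as follows, assuming a single well-mixed zone, no indoor sources, and negligible deposition for the tracer gas (the paper additionally propagates measurement uncertainties and treats particle losses). The function name and the least-squares discretization are illustrative choices.

```python
def air_exchange_rate(c_in, c_out, dt_hours):
    """Estimate the air exchange rate lambda (1/h) from time series of
    indoor and outdoor concentrations of a tracer pollutant, using the
    single-zone mass balance dC_in/dt = lambda * (C_out - C_in).
    Fits lambda by least squares over the discretized balance."""
    num = 0.0
    den = 0.0
    for i in range(len(c_in) - 1):
        dcdt = (c_in[i + 1] - c_in[i]) / dt_hours   # finite-difference rate
        drive = c_out[i] - c_in[i]                  # indoor-outdoor deficit
        num += dcdt * drive
        den += drive * drive
    return num / den
```

Applied to a synthetic time series generated with a known exchange rate, the estimator recovers that rate; with real data, noise in both concentration channels dominates the uncertainty.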
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozaki, Toshiro, E-mail: ganronbun@amail.plala.or.jp; Seki, Hiroshi; Shiina, Makoto
2009-09-15
The purpose of the present study was to elucidate a method for predicting the intrahepatic arteriovenous shunt rate from computed tomography (CT) images and biochemical data, instead of from arterial perfusion scintigraphy, because exacerbated adverse systemic effects may be induced in cases where a high shunt rate exists. CT and arterial perfusion scintigraphy were performed in patients with liver metastases from gastric or colorectal cancer. Biochemical data and tumor marker levels of 33 enrolled patients were measured. The results were statistically verified by multiple regression analysis. The total metastatic hepatic tumor volume (V_metastasized), residual hepatic parenchyma volume (V_residual; calculated from CT images), and biochemical data were treated as independent variables; the intrahepatic arteriovenous (IHAV) shunt rate (calculated from scintigraphy) was treated as a dependent variable. The IHAV shunt rate was 15.1 ± 11.9%. Based on the correlation matrixes, the best correlation coefficient of 0.84 was established between the IHAV shunt rate and V_metastasized (p < 0.01). In the multiple regression analysis with the IHAV shunt rate as the dependent variable, the coefficient of determination (R^2) was 0.75, which was significant at the 0.1% level with two significant independent variables (V_metastasized and V_residual). The standardized regression coefficients (β) of V_metastasized and V_residual were significant at the 0.1 and 5% levels, respectively. Based on this result, we can obtain a predicted value of the IHAV shunt rate (p < 0.001) using CT images. When a high shunt rate is predicted, beneficial and consistent clinical monitoring can be initiated, for example, in hepatic arterial infusion chemotherapy.
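The multiple regression step can be sketched as an ordinary least-squares fit with V_metastasized and V_residual as predictors. The volumes and shunt rates used below are synthetic placeholders, not the study's patient data.

```python
import numpy as np

def fit_shunt_model(v_met, v_res, shunt_rate):
    """Ordinary least-squares fit of
        shunt_rate ~ b0 + b1 * V_metastasized + b2 * V_residual,
    mirroring the form of multiple regression described in the study.
    Returns the coefficient vector (b0, b1, b2) and R^2."""
    X = np.column_stack([np.ones(len(v_met)), v_met, v_res])
    y = np.asarray(shunt_rate, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2
```

Given exact linear synthetic data the fit recovers the generating coefficients with R^2 near 1; on clinical data R^2 would of course be lower, as in the reported 0.75.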
NASA Astrophysics Data System (ADS)
Dalman, E.; Taylor, M. H.; Veloza-fajardo, G.; Mora, A.
2014-12-01
Northwest South America is actively deforming through the interaction between the Nazca, South American, and Caribbean plates. Though the Colombian Andes are well studied, much uncertainty remains in the rate of Quaternary deformation along the east-directed frontal thrust faults hundreds of kilometers inboard of the subduction zones. The eastern foothills of the Eastern Cordillera (EC) preserve deformed landforms, allowing us to quantify incision rates. Using 10Be in-situ terrestrial cosmogenic nuclide (TCN) geochronology, we dated two deformed fluvial terraces in the hanging wall of the Guaicaramo thrust fault. From the 10Be concentration and terrace profile relative to local base level, we calculated incision rates. We present a reconstructed slip history of the Guaicaramo thrust fault and its Quaternary slip rate. Furthermore, to quantify the regional Quaternary deformation, we look at the fluvial response to tectonic uplift. Approximately 20 streams along the eastern foothills of the EC were studied using a digital elevation model (DEM). From the DEM, longitudinal profiles were created and normalized channel steepness (Ksn) values were calculated from plots of drainage area vs. slope. Knickpoints in the longitudinal profiles can record transient perturbations or differential uplift. Calculated Ksn values indicate that the EC is experiencing high rates of uplift, with the highest mean Ksn values occurring in the Cocuy region. Mean channel steepness values along strike of the foothills are related to uplift rates increasing from south to north. In contrast, we suggest that high channel steepness values in the south are controlled instead by high rates of annual precipitation.
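The channel steepness calculation can be sketched with the standard slope-area relation S = k_sn * A^(-theta_ref). A reference concavity of 0.45 is a commonly used default; the study's chosen reference concavity and regression procedure may differ.

```python
def normalized_steepness(slope, drainage_area_m2, theta_ref=0.45):
    """Normalized channel steepness k_sn from local channel slope
    (dimensionless, m/m) and upstream drainage area (m^2), inverting
    the slope-area relation S = k_sn * A^(-theta_ref)."""
    return slope * drainage_area_m2 ** theta_ref
```

In practice k_sn is obtained by regressing log-slope against log-area over channel reaches rather than pointwise, which suppresses DEM noise; this one-liner shows only the underlying relation.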
Galactic and solar radiation exposure to aircrew during a solar cycle.
Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M
2002-01-01
An on-going investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of the cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow for the interpolation of the dose rate for any global position. The model has been extended to an altitude of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions as determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes that are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground-based neutron monitoring, GOES satellite data, and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared to results obtained during recent solar flare events.
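Integrating a dose-rate function between waypoints, as done for route-dose estimation, can be illustrated with a simple trapezoidal sum. This sketch is not the PCAIRE implementation, and the waypoint values in the example are invented.

```python
def route_dose(dose_rates_usv_h, segment_hours):
    """Route dose (uSv) by trapezoidal integration of the ambient dose
    equivalent rate along a flight path split into segments.
    dose_rates_usv_h: rate at each of N+1 waypoints (uSv/h);
    segment_hours: flight time for each of the N segments between them."""
    total = 0.0
    for i, dt in enumerate(segment_hours):
        # average the rates at the segment's endpoints, times its duration
        total += 0.5 * (dose_rates_usv_h[i] + dose_rates_usv_h[i + 1]) * dt
    return total
```

For instance, a two-hour segment whose endpoint rates are 4 and 6 uSv/h contributes 10 uSv to the route dose.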
Gamow-Teller Strength Distributions for pf-shell Nuclei and its Implications in Astrophysics
NASA Astrophysics Data System (ADS)
Rahman, M.-U.; Nabi, J.-U.
2009-08-01
The pf-shell nuclei are present in abundance in the pre-supernova and supernova phases, and these nuclei are considered to play an important role in the dynamics of core-collapse supernovae. The B(GT) values are calculated for the pf-shell nuclei 55Co and 57Zn using the pn-QRPA theory. The calculated B(GT) strengths differ from earlier reported shell model calculations; however, the results are in good agreement with the experimental data. These B(GT) strengths are used in the calculations of weak decay rates, which play a decisive role in core-collapse supernova dynamics and nucleosynthesis. Unlike previous calculations, the so-called Brink's hypothesis is not assumed in the present calculation, which leads to a more realistic estimate of weak decay rates. The electron capture rates are calculated over a wide grid of temperature (0.01 × 10^9 - 30 × 10^9 K) and density (10 - 10^11 g cm^-3). Our rates are enhanced compared to the reported shell model rates. This enhancement is attributed partly to the liberty of selecting a huge model space, allowing consideration of many more excited states in the present electron capture rate calculations.
NASA Technical Reports Server (NTRS)
Sarracino, Marcello
1941-01-01
The present article deals with what is considered to be a simpler and more accurate method of determining, from the results of bench tests under approved rating conditions, the power at altitude of a supercharged aircraft engine without application of correction formulas. This method of calculating the altitude characteristics of supercharged engines, based on the consumption of air, is a more satisfactory and accurate procedure, especially at low boost pressures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickstad, B.; Bergman, M.
A computerized procedure has been developed that predicts the growth of an initial circumferential surface crack through a pipe and further on to failure. The crack growth mechanism can either be fatigue or stress corrosion. Consideration is taken to complex crack shapes, and for the through-wall cracks, crack opening areas and leak rates are also calculated. The procedure is based on a large number of three-dimensional finite element calculations of cracked pipes. The results from these calculations are stored in a database from which the PC-program, denoted LBBPIPE, reads all necessary information. In this paper, a sensitivity analysis is presented for cracked pipes subjected to both stress corrosion and vibration fatigue.
Aromatic hydroxylation by cytochrome P450: model calculations of mechanism and substituent effects.
Bathelt, Christine M; Ridder, Lars; Mulholland, Adrian J; Harvey, Jeremy N
2003-12-10
The mechanism and selectivity of aromatic hydroxylation by cytochrome P450 enzymes is explored using new B3LYP density functional theory computations. The calculations, using a realistic porphyrin model system, show that rate-determining addition of compound I to an aromatic carbon atom proceeds via a transition state with partial radical and cationic character. Reactivity is shown to depend strongly on ring substituents, with both electron-withdrawing and -donating groups strongly decreasing the addition barrier in the para position, and it is shown that the calculated barrier heights can be reproduced by a new dual-parameter equation based on radical and cationic Hammett sigma parameters.
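The dual-parameter Hammett correlation can be sketched as an ordinary least-squares fit of barrier heights against radical and cationic sigma constants. The sigma values and barriers in the example are synthetic placeholders, not the paper's computed data or fitted parameters.

```python
import numpy as np

def fit_dual_hammett(sigma_rad, sigma_cat, barriers):
    """Least-squares fit of a dual-parameter Hammett equation
        dE = rho_rad * sigma_rad + rho_cat * sigma_cat + c,
    the functional form used to correlate addition barriers with
    radical and cationic substituent parameters.
    Returns (rho_rad, rho_cat, c)."""
    X = np.column_stack([sigma_rad, sigma_cat, np.ones(len(barriers))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(barriers, dtype=float),
                               rcond=None)
    return tuple(coef)
```

Negative fitted rho values would indicate that substituents stabilizing radical or cationic character both lower the barrier, consistent with the transition state's mixed character described above.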