Sample records for base case calculation

  1. Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety

    NASA Astrophysics Data System (ADS)

    Mikula, J. F. Kip

    2005-12-01

    This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model, a Fault Tree Analysis (FTA) of the system, or an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as in the REBST discussion. In the end it is concluded that an approach combining the two theories works best to reduce safety risk.

  2. An application of boundary element method calculations to hearing aid systems: The influence of the human head

    NASA Astrophysics Data System (ADS)

    Rasmussen, Karsten B.; Juhl, Peter

    2004-05-01

    Boundary element method (BEM) calculations are used for the purpose of predicting the acoustic influence of the human head in two cases. In the first case the sound source is the mouth and in the second case the sound is plane waves arriving from different directions in the horizontal plane. In both cases the sound field is studied in relation to two positions above the right ear being representative of hearing aid microphone positions. Both cases are relevant for hearing aid development. The calculations are based upon a direct BEM implementation in Matlab. The meshing is based on the original geometrical data files describing the B&K Head and Torso Simulator 4128 combined with a 3D scan of the pinna.

  3. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
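
    To make the approach concrete, here is a minimal, purely illustrative Monte Carlo sketch in Python: it assumes a hypothetical log-normal burial-depth distribution (the paper's actual input distributions and parameters are not given in the abstract) and estimates the smallest probing depth that reaches a target fraction of simulated victims.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical burial-depth distribution in metres; illustrative only,
      # not the paper's calibrated inputs.
      depths = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

      # Fraction of simulated victims reachable at each candidate probing depth
      candidates = np.arange(1.0, 4.01, 0.1)
      coverage = np.array([(depths <= d).mean() for d in candidates])

      # Smallest probing depth that reaches, say, 95% of simulated victims
      target = 0.95
      optimal = candidates[np.argmax(coverage >= target)]
      print(f"probing depth for {target:.0%} coverage: {optimal:.1f} m")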

  4. Preliminary calculations related to the accident at Three Mile Island

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchner, W.L.; Stevenson, M.G.

    This report discusses preliminary studies of the Three Mile Island Unit 2 (TMI-2) accident based on available methods and data. The work reported includes: (1) a TRAC base case calculation out to 3 hours into the accident sequence; (2) TRAC parametric calculations, which are the same as the base case except for a single hypothetical change in the system conditions, such as assuming the high pressure injection (HPI) system operated as designed rather than as in the accident; (3) estimates of fuel rod cladding failure, cladding oxidation due to zirconium metal-steam reactions, hydrogen release due to cladding oxidation, cladding ballooning, cladding embrittlement, and subsequent cladding breakup, based on TRAC-calculated cladding temperatures and system pressures. Some conclusions of this work are: the TRAC base case accident calculation agrees very well with known system conditions to nearly 3 hours into the accident; the parametric calculations indicate that loss-of-core cooling was most influenced by the throttling of high-pressure injection (HPI) flows, given the accident initiating events and the pressurizer electromagnetic-operated valve (EMOV) failing to close as designed; failure of nearly all the rods and gaseous fission product release from the failed rods are predicted to have occurred at about 2 hours and 30 minutes; cladding oxidation (zirconium-steam reaction) up to 3 hours resulted in the production of approximately 40 kilograms of hydrogen.

  5. CDMBE: A Case Description Model Based on Evidence

    PubMed Central

    Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing

    2015-01-01

    By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), suitable for the continental law system, is proposed to describe criminal cases. The model adopts credibility-based logical reasoning and quantifies evidence-based inference from the available evidence. To remain consistent with practical inference rules, five types of relationships and a set of rules are defined to calculate the credibility of assumptions based on the credibility and supportability of the related evidence. Experiments show that the model can capture users' reasoning in a diagram and that the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006

  6. The Band Structure of Polymers: Its Calculation and Interpretation. Part 2. Calculation.

    ERIC Educational Resources Information Center

    Duke, B. J.; O'Leary, Brian

    1988-01-01

    Details ab initio crystal orbital calculations using all-trans-polyethylene as a model. Describes calculations based on various forms of translational symmetry. Compares these calculations with ab initio molecular orbital calculations discussed in a preceding article. Discusses three major approximations made in the crystal case. (CW)

  7. Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study.

    PubMed

    Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald

    2015-03-21

    Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods, owing to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for a 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms, with accuracy comparable to state-of-the-art CPU-based MC codes.
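
    The gamma index used above is a standard comparison metric (after Low et al. 1998); the following is a simplified 1D re-implementation for illustration only, not the code used in the study. It combines a dose-difference criterion (as a fraction of the maximum reference dose) with a distance-to-agreement criterion.

      import numpy as np

      def gamma_1d(ref, eval_, dx, dose_frac=0.01, dta_mm=1.0):
          """Global 1D gamma index: for each reference point, minimize
          sqrt((dose diff / criterion)^2 + (distance / DTA)^2) over all
          evaluated points. dose_frac is the dose criterion as a fraction
          of max(ref); dx and dta_mm are in mm."""
          x = np.arange(len(ref)) * dx
          dose_crit = dose_frac * ref.max()
          gammas = np.empty(len(ref))
          for i in range(len(ref)):
              dose_term = (eval_ - ref[i]) / dose_crit
              dist_term = (x - x[i]) / dta_mm
              gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
          return gammas

      # 1%/1 mm criterion, as in the study; toy Gaussian profiles
      ref = np.exp(-np.linspace(-3, 3, 200) ** 2)
      ev = ref * 1.005  # evaluated distribution, 0.5% high everywhere
      g = gamma_1d(ref, ev, dx=0.5, dose_frac=0.01, dta_mm=1.0)
      print(f"passing rate (gamma <= 1): {(g <= 1).mean():.1%}")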

  8. Calculations of Hubbard U from first-principles

    NASA Astrophysics Data System (ADS)

    Aryasetiawan, F.; Karlsson, K.; Jepsen, O.; Schönberger, U.

    2006-09-01

    The Hubbard U of the 3d transition metal series, as well as SrVO3, YTiO3, Ce, and Gd, has been estimated using a recently proposed scheme based on the random-phase approximation. The values obtained are generally in good accord with the values often used in model calculations, but for some cases the estimated values are somewhat smaller than those used in the literature. We have also calculated the frequency-dependent U for some of the materials. The strong frequency dependence of U in some of the cases considered in this paper suggests that the static value of U may not be the most appropriate one to use in model calculations. We have also made a comparison with the constrained local density approximation (LDA) method and found some discrepancies in a number of cases. We emphasize that our scheme and the constrained LDA method theoretically ought to give similar results, and the discrepancies may be attributed to technical difficulties in performing calculations based on currently implemented constrained LDA schemes.

  9. Effect of costing methods on unit cost of hospital medical services.

    PubMed

    Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya

    2007-04-01

    To explore the variance of unit costs of hospital medical services due to different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of unit costs of medical services as a base case. After that, the unit costs were re-calculated based on various methods. Finally, the variations between the results obtained from the various methods and the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% resulted in a 4.76% increase of the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria of indirect cost allocation, unit costs of medical services changed by -6.99% to +4.05%. We also explored the effect on the unit costs of medical services in one department: results from the various costing methods, including departmental allocation methods, ranged between -85% and +32% relative to the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and complicated ones. For the departmental allocation methods, the micro-costing method was the most suitable at the time of the study. These different costing methods should be standardized and developed as guidelines, since they could affect implementation of the national health insurance scheme and health financing management.
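
    As a rough illustration of the two capital-costing approaches compared above, the sketch below contrasts the accounting-based annual cost (purchase price averaged over the useful life) with the economic-based equivalent annual cost (depreciation plus interest, via the standard annualization factor). The price, life and rates are illustrative, not the hospital's data.

      def accounting_annual_cost(price, useful_life):
          # Accounting-based: purchase price averaged over its useful life
          return price / useful_life

      def economic_annual_cost(price, useful_life, discount_rate):
          # Economic-based: equivalent annual cost via the annualization
          # factor r / (1 - (1 + r)^-n), i.e. depreciation plus interest
          # on the undepreciated portion
          r, n = discount_rate, useful_life
          return price * r / (1 - (1 + r) ** -n)

      price, life = 1_000_000, 10  # illustrative capital item
      print(accounting_annual_cost(price, life))      # 100000.0
      print(economic_annual_cost(price, life, 0.03))  # ~117,230
      print(economic_annual_cost(price, life, 0.06))  # ~135,868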

  10. Preliminary evaluation of the dosimetric accuracy of cone-beam computed tomography for cases with respiratory motion

    NASA Astrophysics Data System (ADS)

    Kim, Dong Wook; Bae, Sunhyun; Chung, Weon Kuu; Lee, Yoonhee

    2014-04-01

    Cone-beam computed tomography (CBCT) images are currently used for patient positioning and adaptive dose calculation; however, the degree of CBCT uncertainty in cases of respiratory motion remains an interesting issue. This study evaluated the uncertainty of CBCT-based dose calculations for a moving target. Using a phantom, we estimated differences in the geometries and the Hounsfield units (HU) between CT and CBCT. The calculated dose distributions based on CT and CBCT images were also compared using a radiation treatment planning system, and the comparison included cases with respiratory motion. The geometrical uncertainties of the CT and the CBCT images were less than 0.15 cm. The HU differences between CT and CBCT images for the standard-dose-head, high-quality-head, normal-pelvis, and low-dose-thorax modes were 31, 36, 23, and 33 HU, respectively. Gamma (3%, 0.3 cm) analysis between the CT- and CBCT-based dose distributions showed agreement in 99% of the area, and the same held during respiratory motion. The uncertainty of the CBCT-based dose calculation was thus evaluated for cases with respiratory motion. In conclusion, image distortion due to motion did not significantly influence dosimetric parameters.

  11. Monte Carlo dose calculations for high-dose-rate brachytherapy using GPU-accelerated processing.

    PubMed

    Tian, Z; Zhang, M; Hrycushko, B; Albuquerque, K; Jiang, S B; Jia, X

    2016-01-01

    Current clinical brachytherapy dose calculations are typically based on the Association of American Physicists in Medicine Task Group report 43 (TG-43) guidelines, which approximate the patient geometry as an infinitely large water phantom. This ignores patient and applicator geometries and heterogeneities, causing dosimetric errors. Although Monte Carlo (MC) dose calculation is commonly recognized as the most accurate method, its long computational time is a major bottleneck for routine clinical applications. This article presents our recent development of a fast MC dose calculation package for high-dose-rate (HDR) brachytherapy, gBMC, built on a graphics processing unit (GPU) platform. gBMC simulates photon transport in voxelized geometry, with the physics relevant to the (192)Ir HDR brachytherapy energy range considered. A phase-space file was used as the source model. GPU-based parallel computation was used to simultaneously transport multiple photons, one per GPU thread. We validated gBMC by comparing its dose calculation results in water with those computed with TG-43. We also studied heterogeneous phantom cases and a patient case and compared gBMC results with Acuros BV results. The radial dose function in water calculated by gBMC showed <0.6% relative difference from the TG-43 data. The difference in the anisotropy function was <1%. In two heterogeneous slab phantoms and one shielded cylinder applicator case, the average dose discrepancy between gBMC and Acuros BV was <0.87%. For a tandem and ovoid patient case, good agreement between gBMC and Acuros BV results was observed in both isodose lines and dose-volume histograms. In terms of efficiency, it took ∼47.5 seconds for gBMC to reach 0.15% statistical uncertainty within the 5% isodose line for the patient case. The accuracy and efficiency of a new GPU-based MC dose calculation package, gBMC, for HDR brachytherapy make it attractive for clinical applications. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  12. TH-A-19A-11: Validation of GPU-Based Monte Carlo Code (gPMC) Versus Fully Implemented Monte Carlo Code (TOPAS) for Proton Radiation Therapy: Clinical Cases Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D; Schuemann, J; Dowdell, S

    Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to that of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) versus a fully implemented proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical geometrical complexity (air cavities and density heterogeneities) that makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a gamma index passing rate for the target of more than 99%, the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy MC code for a group of dosimetrically challenging patient cases.

  13. Updated Global Burden of Cholera in Endemic Countries

    PubMed Central

    Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.

    2015-01-01

    Background The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation of our previous estimate was that the endemic and non-endemic countries were defined based on the countries' reported cholera cases. We overcame this limitation by using a spatial modelling technique to define endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percentage of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using inverse-variance-weighted average case-fatality rates (CFRs) from literature-based CFR estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3-4.0 million) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
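
    The burden arithmetic described in the Methods reduces to multiplying at-risk populations by incidence rates and applying an inverse-variance-weighted case-fatality rate. A toy sketch with illustrative numbers (not the paper's country-level inputs):

      import numpy as np

      def weighted_cfr(cfrs, variances):
          # Inverse-variance-weighted average case-fatality rate
          w = 1.0 / np.asarray(variances)
          return float(np.sum(w * np.asarray(cfrs)) / np.sum(w))

      # Illustrative inputs only
      pop_at_risk = 1.3e9   # people at risk in endemic countries
      incidence = 2.2e-3    # cases per person-year, assumed here
      cases = pop_at_risk * incidence
      cfr = weighted_cfr([0.030, 0.036, 0.028], [1e-5, 4e-5, 2e-5])
      deaths = cases * cfr
      print(f"cases: {cases:.3g}, deaths: {deaths:.3g}")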

  14. Variation Among Internet Based Calculators in Predicting Spontaneous Resolution of Vesicoureteral Reflux

    PubMed Central

    Routh, Jonathan C.; Gong, Edward M.; Cannon, Glenn M.; Yu, Richard N.; Gargollo, Patricio C.; Nelson, Caleb P.

    2010-01-01

    Purpose An increasing number of parents and practitioners use the Internet for health related purposes, and an increasing number of models are available on the Internet for predicting spontaneous resolution rates for children with vesicoureteral reflux. We sought to determine whether currently available Internet based calculators for vesicoureteral reflux resolution produce systematically different results. Materials and Methods Following a systematic Internet search we identified 3 Internet based calculators of spontaneous resolution rates for children with vesicoureteral reflux, of which 2 were academic affiliated and 1 was industry affiliated. We generated a random cohort of 100 hypothetical patients with a wide range of clinical characteristics and entered the data on each patient into each calculator. We then compared the results from the calculators in terms of mean predicted resolution probability and number of cases deemed likely to resolve at various cutoff probabilities. Results Mean predicted resolution probabilities were 41% and 36% (range 31% to 41%) for the 2 academic affiliated calculators and 33% for the industry affiliated calculator (p = 0.02). For some patients the calculators produced markedly different probabilities of spontaneous resolution, in some instances ranging from 24% to 89% for the same patient. At thresholds greater than 5%, 10% and 25% probability of spontaneous resolution the calculators differed significantly regarding whether cases would resolve (all p < 0.0001). Conclusions Predicted probabilities of spontaneous resolution of vesicoureteral reflux differ significantly among Internet based calculators. For certain patients, particularly those with a lower probability of spontaneous resolution, these differences can significantly influence clinical decision making. PMID:20172550

  15. Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment

    NASA Astrophysics Data System (ADS)

    Barnett, D. A., Jr.

    1991-02-01

    An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S16 quadrature and P1 cross sections in the MUFT multigroup structure, the calculated solution agreed to within 18 percent with the spectral measurements and to within 24 percent with the integral measurements. Variations on the base case using a few-group energy structure and P1 and P3 cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and few-group cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.

  16. On the validity of microscopic calculations of double-quantum-dot spin qubits based on Fock-Darwin states

    NASA Astrophysics Data System (ADS)

    Chan, GuoXuan; Wang, Xin

    2018-04-01

    We consider two typical approximations that are used in microscopic calculations of double-quantum-dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation is exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling in Fock-Darwin states, which are accurate only in the case of a single potential well. Our results suggest that the accuracy of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation can only worsen in dimensions higher than one.

  17. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, which included 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo code based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation time from 5 ms per proton to around 5 μs.

  18. Sorting variables for each case: a new algorithm to calculate injury severity score (ISS) using SPSS-PC.

    PubMed

    Linn, S

    One of the most often used measures of multiple injuries is the injury severity score (ISS). Determination of the ISS is based on the abbreviated injury scale (AIS). This paper suggests a new algorithm to sort the AIS values for each case and calculate the ISS. The program takes the unsorted abbreviated injury scale (AIS) levels for each case and rearranges them in descending order. The first three sorted AIS values, representing the three most severe injuries of a person, are then used to calculate the injury severity score (ISS). This algorithm should be useful for analyses of clusters of injuries, especially when more patients have multiple injuries.
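
    The algorithm is straightforward to re-express outside SPSS-PC. A minimal Python equivalent is sketched below; note that the conventional ISS takes the highest AIS in each of the three most severely injured body regions, whereas the abstract describes sorting all of a case's AIS values, which the sketch follows. The AIS = 6 rule is a standard convention, assumed here rather than stated in the abstract.

      def injury_severity_score(ais_levels):
          """ISS from unsorted AIS severity levels for one case: sort in
          descending order, take the three most severe injuries, and sum
          their squares. By convention, any AIS of 6 gives the maximum
          ISS of 75 (assumed, not stated in the abstract)."""
          top3 = sorted(ais_levels, reverse=True)[:3]
          if 6 in top3:
              return 75
          return sum(a * a for a in top3)

      print(injury_severity_score([3, 2, 5, 1]))  # 5^2 + 3^2 + 2^2 = 38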

  19. [Cost analysis of intraoperative neurophysiological monitoring (IOM)].

    PubMed

    Kombos, T; Suess, O; Brock, M

    2002-01-01

    A number of studies demonstrate that a significant reduction of postoperative neurological deficits can be achieved by applying intraoperative neurophysiological monitoring (IOM) methods. A cost analysis of IOM is imperative considering the strained financial situation in the public health services. The calculation model presented here comprises two cost components: material and personnel. The material costs comprise consumer goods and depreciation of capital goods. The computation base was 200 IOM cases per year. Consumer goods were calculated for each IOM procedure respectively. The following constellation served as a basis for calculating personnel costs: (a) a medical technician (salary level BAT Vc) for one hour per case; (b) a resident (BAT IIa) for the entire duration of the measurement; and (c) a senior resident (BAT Ia) only for supervision. An IOM device consisting of an 8-channel preamplifier, an electrical and acoustic stimulator and special software costs 66,467 euros on average. With an annual depreciation of 20%, the costs are 13,293 euros per year. This amounts to 66.46 euros per case for the capital goods. For reusable materials, a sum of 0.75 euros per case was calculated. Disposable materials were calculated for each procedure respectively. Total costs of 228.02 euros per case were calculated for surgery on the peripheral nervous system. They amount to 196.40 euros per case for spinal interventions and to 347.63 euros per case for more complex spinal operations. Operations in the cerebellopontine angle and brain stem cost 376.63 euros and 397.33 euros per case, respectively. IOM costs amount to 328.03 euros per case for surgical management of an intracranial aneurysm and to 537.15 euros per case for functional interventions. Expenses run up to 833.63 euros per case for operations near the motor cortex and to 117.65 euros per case for intraoperative speech monitoring. Costs for inpatient medical rehabilitation have increased considerably in recent years. In view of the financial situation, it is necessary to reduce postoperative morbidity and the costs it involves. IOM leads to a reduction of morbidity. The costs for IOM calculated here justify its routine application in view of the legal and socioeconomic consequences of surgery-related neurological deficits.

  20. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, geometry of construction objects and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method using Case-Based Reasoning are presented, too. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of the cost calculations based on the CBR method is presented as the final result.
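
    Local and global similarity in case-based reasoning is commonly computed as a weighted aggregate of attribute-level similarities. The sketch below illustrates that pattern with hypothetical sports-field attributes; the paper's actual attributes, weights and similarity functions are not given in the abstract.

      def local_similarity(a, b, value_range):
          # Local similarity of one numeric attribute, scaled to [0, 1]
          return 1.0 - abs(a - b) / value_range

      def global_similarity(case, query, weights, ranges):
          # Global similarity: weighted average of the local similarities
          num = sum(w * local_similarity(case[k], query[k], ranges[k])
                    for k, w in weights.items())
          return num / sum(weights.values())

      # Illustrative attributes (area in m2, number of pitches)
      ranges  = {"area": 20_000, "pitches": 10}
      weights = {"area": 0.7, "pitches": 0.3}
      stored  = {"area": 7_500, "pitches": 2}   # case from the database
      query   = {"area": 8_000, "pitches": 3}   # new project
      print(f"{global_similarity(stored, query, weights, ranges):.3f}")  # 0.952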

  1. Theoretical calculation of coherent Laue-case conversion between x-rays and ALPs for an x-ray light-shining-through-a-wall experiment

    NASA Astrophysics Data System (ADS)

    Yamaji, T.; Yamazaki, T.; Tamasaku, K.; Namba, T.

    2017-12-01

    Single crystals have high atomic electric fields, as much as 10^11 V/m, which correspond to magnetic fields of ~10^3 T. These fields can be utilized to convert x-rays into axionlike particles (ALPs) coherently, similar to x-ray diffraction. In this paper, we perform the first theoretical calculation of the Laue-case conversion in crystals based on the Darwin dynamical theory of x-ray diffraction. The calculation shows that the Laue-case conversion has a longer interaction length than the Bragg case, and that ALPs in the keV range can be resonantly converted by tuning the incident angle of the x-rays. ALPs with mass up to O(10 keV) can be searched for by light-shining-through-a-wall (LSW) experiments at synchrotron x-ray facilities.

  2. Ontology-Based Exchange and Immediate Application of Business Calculation Definitions for Online Analytical Processing

    NASA Astrophysics Data System (ADS)

    Kehlenbeck, Matthias; Breitner, Michael H.

    Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions contain necessary knowledge regarding quantitative relations for deep analyses and for the production of meaningful reports. The business calculation definitions are implementation and widely organization independent. But no automated procedures facilitating their exchange across organization and implementation boundaries exist. Separately each organization currently has to map its own business calculations to analysis and reporting tools. This paper presents an innovative approach based on standard Semantic Web technologies. This approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.

  3. Poster - Thur Eve - 68: Evaluation and analytical comparison of different 2D and 3D treatment planning systems using dosimetry in anthropomorphic phantom.

    PubMed

    Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S

    2012-07-01

    The aim of this study was to evaluate and analytically compare different calculation algorithms applied in our country's radiotherapy centers, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA TECDOC 1583). A thorax anthropomorphic phantom (CIRS 002LFC) was used to perform 7 tests that simulate the whole chain of external beam TPS planning. Doses were measured with ion chambers, and the deviations between measured and TPS-calculated doses were reported. This methodology, which employs the same phantom and the same setup test cases, was applied in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with more advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the advancement of the TPS calculation algorithm. Large deviations were seen in some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits on TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.

  4. Test case for VVER-1000 complex modeling using MCU and ATHLET

    NASA Astrophysics Data System (ADS)

    Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.

    2017-01-01

    The correct modeling of processes occurring in the core of a reactor is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often the calculations are carried out within the framework of only one domain, for example, structural analysis, neutronics (NT) or thermal hydraulics (TH). However, this is not always correct, as the impact of related physical processes occurring simultaneously could be significant. Therefore it is recommended to perform coupled calculations. The paper provides a test case for the coupled neutronics-thermal hydraulics calculation of a VVER-1000 using the precise neutron code MCU and the system engineering code ATHLET. The model is based on a fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperature, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations using other programs, for example, for code cross-verification. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and an iterative calculation scheme is given in the paper. A script in the Perl language was written to couple the codes.

  5. Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC

    NASA Astrophysics Data System (ADS)

    Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina

    2016-11-01

    New state-specific vibrational-translational energy exchange and dissociation models, based on ab-initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shockwave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.

  6. Web-based Tsunami Early Warning System: a case study of the 2010 Kepulaunan Mentawai Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Ulutas, E.; Inan, A.; Annunziato, A.

    2012-06-01

    This study analyzes the response of the Global Disasters Alerts and Coordination System (GDACS) in relation to a case study: the Kepulaunan Mentawai earthquake and related tsunami, which occurred on 25 October 2010. The GDACS, developed by the European Commission Joint Research Centre, combines existing web-based disaster information management systems with the aim of alerting the international community in case of major disasters. The tsunami simulation system is an integral part of the GDACS. In more detail, the study aims to assess the tsunami hazard on the Mentawai and Sumatra coasts: the tsunami heights and arrival times have been estimated employing three propagation models based on the long wave theory. The analysis was performed in three stages: (1) pre-calculated simulations using the tsunami scenario database for that region, used by the GDACS system to estimate the alert level; (2) near-real-time simulated tsunami forecasts, automatically performed by the GDACS system whenever a new earthquake is detected by the seismological data providers; and (3) post-event tsunami calculations using the GCMT (Global Centroid Moment Tensor) fault mechanism solution proposed by the US Geological Survey (USGS) for this event. The GDACS system estimates the alert level based on the first type of calculation and on that basis sends alert messages to its users; the second type of calculation is available within 30-40 min after the notification of the event but does not change the estimated alert level. The third type of calculation is performed to improve the initial estimates and to better understand the extent of the possible damage. The automatic alert level for the earthquake was given between Green and Orange Alert, which, in the logic of GDACS, means no or moderate need of international humanitarian assistance; however, the earthquake generated a 3 to 9 m tsunami run-up along the southwestern coasts of the Pagai Islands, where 431 people died. The post-event calculations indicated medium-high humanitarian impact.

  7. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
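
    The data-parallel structure mentioned above comes from the fact that, once the dose deposition coefficients are known, each voxel's dose is an independent dot product over beamlet weights. A minimal CPU-side sketch of that mapping (sizes and values illustrative; the GPU kernel itself is not shown):

      import numpy as np

      n_voxels, n_beamlets = 10_000, 200
      # Dose deposition coefficients: dose in voxel i per unit fluence of beamlet j
      D = np.random.rand(n_voxels, n_beamlets).astype(np.float32)
      # Beamlet fluence weights from the optimizer
      w = np.random.rand(n_beamlets).astype(np.float32)

      # Dose per voxel: each row is an independent dot product, which is why
      # the calculation maps naturally onto one GPU thread per voxel.
      dose = D @ w
      print(dose.shape)  # (10000,)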

  8. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y; Li, Y; Tian, Z

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%-6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.

  9. Impact of heterogeneity-corrected dose calculation using a grid-based Boltzmann solver on breast and cervix cancer brachytherapy.

    PubMed

    Hofbauer, Julia; Kirisits, Christian; Resch, Alexandra; Xu, Yingjie; Sturdza, Alina; Pötter, Richard; Nesvacil, Nicole

    2016-04-01

    To analyze the impact of heterogeneity-corrected dose calculation on dosimetric quality parameters in gynecological and breast brachytherapy using Acuros, a grid-based Boltzmann equation solver (GBBS), and to evaluate the shielding effects of different cervix brachytherapy applicators. Calculations with TG-43 and Acuros were based retrospectively on computed tomography (CT), for 10 cases of accelerated partial breast irradiation and 9 cervix cancer cases treated with tandem-ring applicators. Phantom CT-scans of different applicators (plastic and titanium) were acquired. For breast cases the V20Gy(α/β=3) to lung, the D0.1cm(3), D1cm(3), D2cm(3) to rib, the D0.1cm(3), D1cm(3), D10cm(3) to skin, and Dmax for all structures were reported. For cervix cases, the D0.1cm(3), D2cm(3) to bladder, rectum and sigmoid, and the D50, D90, D98, V100 for the CTVHR were reported. For the phantom study, surrogates for target and organ at risk were created for a similar dose volume histogram (DVH) analysis. Absorbed dose and equivalent dose in 2 Gy fractions (EQD2) were used for comparison. Calculations with TG-43 overestimated the dose for all dosimetric indices investigated. For breast, a decrease of ~8% was found for D10cm(3) to the skin and 5% for D2cm(3) to rib, resulting in a difference of ~ -1.5 Gy EQD2 for the overall treatment. Smaller effects were found for cervix cases with the plastic applicator, with up to -2% (-0.2 Gy EQD2) per fraction for organs at risk and -0.5% (-0.3 Gy EQD2) per fraction for the CTVHR. The shielding effect of the titanium applicator resulted in a decrease of 2% for D2cm(3) to the organ at risk versus 0.7% for plastic. Lower doses were reported when calculating with Acuros compared to TG-43. Differences in dose parameters were larger in breast cases. A lower impact on clinical dose parameters was found for the cervix cases. Applicator material causes systematic shielding effects that can be taken into account.

  10. The modeler's influence on calculated solubilities for performance assessments at the Aspo Hard-rock Laboratory

    USGS Publications Warehouse

    Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.

    1999-01-01

    Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.

  11. Case Studies in Systems Chemistry. Final Report. [Includes Complete Case Study, Carboxylic Acid Equilibria

    ERIC Educational Resources Information Center

    Fleck, George

    This publication was produced as a teaching tool for college chemistry. The book is a text for a computer-based unit on the chemistry of acid-base titrations, and is designed for use with FORTRAN or BASIC computer systems, and with a programmable electronic calculator, in a variety of educational settings. The text attempts to present computer…

  12. GIS-based regionalized life cycle assessment: how big is small enough? Methodology and case study of electricity generation.

    PubMed

    Mutel, Christopher L; Pfister, Stephan; Hellweg, Stefanie

    2012-01-17

    We describe a new methodology for performing regionalized life cycle assessment and systematically choosing the spatial scale of regionalized impact assessment methods. We extend standard matrix-based calculations to include matrices that describe the mapping from inventory to impact assessment spatial supports. Uncertainty in inventory spatial data is modeled using a discrete spatial distribution function, which in a case study is derived from empirical data. The minimization of global spatial autocorrelation is used to choose the optimal spatial scale of impact assessment methods. We demonstrate these techniques on electricity production in the United States, using regionalized impact assessment methods for air emissions and freshwater consumption. Case study results show important differences between site-generic and regionalized calculations, and provide specific guidance for future improvements of inventory data sets and impact assessment methods.
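
    A minimal sketch of the extended matrix calculation described above, with assumed notation (A: technosphere matrix; f: demand; b: emissions per unit process output; L: location shares of each process's emissions over inventory spatial units; M: mapping from inventory units to impact-assessment regions; c: regionalized characterization factors). The numbers are illustrative, not from the case study.

      import numpy as np

      A = np.array([[1.0, -0.2],
                    [0.0,  1.0]])       # 2 processes
      f = np.array([1.0, 0.0])          # functional unit demand
      b = np.array([2.0, 0.5])          # kg of one substance per unit output
      L = np.array([[0.7, 0.3, 0.0],    # process 1 over 3 inventory units
                    [0.0, 0.4, 0.6]])   # process 2 (rows sum to 1)
      M = np.array([[1.0, 0.0],         # inventory unit -> 2 IA regions
                    [0.5, 0.5],
                    [0.0, 1.0]])
      c = np.array([0.8, 1.5])          # region-specific factors

      s = np.linalg.solve(A, f)         # process scaling vector
      g = (b * s) @ L                   # emission per inventory spatial unit
      impact = (g @ M) @ c              # regionalized impact score
      print(impact)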

  13. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances, relaxation-based algorithms for the approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, however, the univariate case is investigated in greater detail so as to further the understanding of the bivariate case.

  14. Beam Wave Considerations for Optical Link Budget Calculations

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    2016-01-01

    The bounded beam wave nature of electromagnetic radiation emanating from a finite-size aperture is considered for diffraction-based link power budget calculations for an optical communications system. Unlike at radio frequency wavelengths, diffraction effects are very important at optical wavelengths. In the general case, the situation cannot be modeled by supposing isotropic radiating antennas and employing the concept of effective isotropic radiated power. It is shown here, however, that these considerations are no more difficult to treat than spherical-wave isotropic-based calculations. From first principles, a general expression governing the power transfer for a collimated beam wave is derived, and from this are defined the three regions of near-field, first-Fresnel-zone, and far-field behavior. Corresponding equations for the power transfer are given for each region. It is shown that although the well-known linear expressions for power transfer in the far field hold for all distances between source and receiver in the radio frequency case, nonlinear behavior within the first Fresnel zone must be accounted for in the optical case at 1550 nm with typical aperture sizes at source/receiver separations less than 100 km.
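
    The paper's exact expressions are not reproduced in the abstract; in textbook terms the three regions are commonly classified by the Fresnel number, and the far-field limit recovers the familiar linear (Friis-type) transfer written with effective apertures:

      N_F = \frac{a^2}{\lambda L}, \qquad
      \begin{cases}
        N_F \gg 1 & \text{near field,} \\
        N_F \approx 1 & \text{first Fresnel zone (nonlinear transfer),} \\
        N_F \ll 1 & \text{far field,}
      \end{cases}
      \qquad
      \frac{P_r}{P_t} \approx \frac{A_t A_r}{(\lambda L)^2} \quad \text{(far field)}

    where a is the aperture radius, λ the wavelength, L the source/receiver separation, and A_t, A_r the effective transmit and receive apertures.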

  15. Cost estimation using ministerial regulation of public work no. 11/2013 in construction projects

    NASA Astrophysics Data System (ADS)

    Arumsari, Putri; Juliastuti; Khalifah Al'farisi, Muhammad

    2017-12-01

    One of the first tasks in starting a construction project is to estimate the total cost of building the project. In Indonesia there are several standards that are used to calculate the cost estimate of a project. One of the standards used is based on the Ministerial Regulation of Public Work No. 11/2013. However, in a construction project the contractor often has its own cost estimate based on its own calculation. This research aimed to compare the total construction project cost calculated based on the Ministerial Regulation of Public Work No. 11/2013 against the contractors' calculations. Two projects were used as case studies to compare the results. The projects were a 4-storey building located in the Pantai Indah Kapuk area (West Jakarta) and a warehouse located in Sentul (West Java), built by 2 different contractors. The cost estimates from both contractors' calculations were compared to the one based on the Ministerial Regulation of Public Work No. 11/2013. It is found that there were differences between the two calculations of around 1.80%-3.03% in total cost, in which the cost estimate based on the Ministerial Regulation was higher than the contractors' calculations.

  16. Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.

    PubMed

    Heislbetz, Sandra; Rauhut, Guntram

    2010-03-28

    A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.

  17. Initial Assessment of a Rapid Method of Calculating CEV Environmental Heating

    NASA Technical Reports Server (NTRS)

    Pickney, John T.; Milliken, Andrew H.

    2010-01-01

    An innovative method for rapidly calculating spacecraft environmental absorbed heats in planetary orbit is described. The method employs reading a database of pre-calculated orbital absorbed heats and adjusting those heats for the desired orbit parameters. The approach differs from traditional Monte Carlo methods, which are orbit based with a planet-centered coordinate system. The database is based on a spacecraft-centered coordinate system where the full range of possible sun and planet look angles is evaluated. In an example case, 37,044 orbit configurations were analyzed for average orbital heats on selected spacecraft surfaces. Calculation time was under 2 minutes, while a comparable Monte Carlo evaluation would have taken an estimated 26 hours.

  18. [Development and practice evaluation of blood acid-base imbalance analysis software].

    PubMed

    Chen, Bo; Huang, Haiying; Zhou, Qiang; Peng, Shan; Jia, Hongyu; Ji, Tianxing

    2014-11-01

    To develop blood gas and acid-base imbalance analysis software that systematically, rapidly, accurately and automatically determines the type of acid-base imbalance, and to evaluate its clinical application. Using the VBA programming language, computer-aided diagnostic software for the judgment of acid-base balance was developed. The clinical data of 220 patients admitted to the Second Affiliated Hospital of Guangzhou Medical University were retrospectively analyzed. Arterial blood gas data [pH value, HCO(3)(-), arterial partial pressure of carbon dioxide (PaCO₂)] and electrolyte data (Na⁺ and Cl⁻) were collected. The data were entered into the software for acid-base imbalance judgment; at the same time, the type of acid-base imbalance was determined manually using the Henderson-Hasselbalch (H-H) compensation formulas. The consistency of the judgments from the software and the manual calculation was evaluated, and the judgment times of the two methods were compared. The clinical diagnoses of the types of acid-base imbalance for the 220 patients were: 65 normal cases, 90 cases of simple type, 41 cases of mixed type, and 24 cases of triplex type. Compared with manual calculation, the accuracy of the software's judgment was 100% for the normal and triplex types, 98.9% for the simple type and 78.0% for the mixed type; the total accuracy was 95.5%. The Kappa value between the software and manual judgments was 0.935 (P=0.000), demonstrating very good consistency. The time for the software to determine acid-base imbalances was significantly shorter than that of manual judgment (seconds: 18.14 ± 3.80 vs. 43.79 ± 23.86, t=7.466, P=0.000), so the software method was much faster than the manual method. Software judgment can thus replace manual judgment: it is rapid, accurate and convenient, can improve the work efficiency and quality of clinicians, and has great potential for clinical application.
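
    For illustration, here is a heavily reduced Python sketch of the kind of rule such software automates, using textbook compensation checks (Winter's formula for metabolic acidosis); the thresholds are standard teaching values, not necessarily the authors' full rule set.

      def anion_gap(na, cl, hco3):
          # Anion gap in mmol/L (textbook definition without K+)
          return na - cl - hco3

      def classify_simple(ph, paco2, hco3):
          """Very reduced sketch: detect a primary metabolic acidosis and
          check respiratory compensation with Winter's formula
          (expected PaCO2 = 1.5*HCO3 + 8 +/- 2 mmHg). Textbook rules,
          assumed here; not the authors' complete rule set."""
          if ph < 7.35 and hco3 < 22:
              expected = 1.5 * hco3 + 8
              if abs(paco2 - expected) <= 2:
                  return "metabolic acidosis, appropriate compensation"
              elif paco2 > expected + 2:
                  return "metabolic acidosis + respiratory acidosis"
              else:
                  return "metabolic acidosis + respiratory alkalosis"
          return "other / not covered by this sketch"

      print(classify_simple(ph=7.28, paco2=26, hco3=12))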

  19. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combinations. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy, and the effect of vortex breakdown is accounted for by an empirical method. The theoretical method, program capabilities, input format, output variables and job control setup are summarized. Three test cases are presented as guides for potential users of the code.

  20. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    PubMed

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the Chi-square distance measure with the case-based reasoning technique. The study presents an automated risk calculator and death predictor for some life-threatening ailments using a Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine reduces runtime and speeds up execution through the use of a critical χ2 distribution value. This work also introduces a novel feature selection method, referred to as the frequent item based rule (FIBR) method, used to select the best features for the χ2 CBR model at the preprocessing stage of the predictive procedure. The risk calculator is implemented as an in-house PHP program served by an XAMP/Apache HTTP server, with data acquisition and case-base development handled in MySQL. Performance comparison between this system and the NBY, ED-KNN, ANN, SVM, Random Forest and traditional CBR techniques shows that the quality of predictions produced by the proposed system outperformed the baseline methods studied. The experiments show that the precision rate and predictive quality of the system are in most cases equal to or greater than 70%, and that the proposed system executes faster than the baseline methods. The proposed risk calculator is therefore capable of providing useful, consistent, fast, accurate and efficient risk-level prediction to both patients and physicians, online and in real time.
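    The core retrieval step pairs a chi-square distance with a cutoff so that dissimilar cases can be discarded early. A minimal sketch of that combination (the feature vectors, threshold, and case structure are invented for illustration; the paper's FIBR feature selection is not reproduced):

      import numpy as np

      def chi2_distance(x, y, eps=1e-10):
          # Chi-square distance between two non-negative feature vectors.
          x, y = np.asarray(x, float), np.asarray(y, float)
          return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

      def retrieve(query, case_base, threshold):
          # Keep only cases below a critical distance value, mimicking the
          # paper's use of a cutoff to prune the search early.
          hits = [(chi2_distance(query, c["features"]), c) for c in case_base]
          return sorted([h for h in hits if h[0] < threshold], key=lambda h: h[0])

      cases = [{"features": [3, 1, 0, 2], "outcome": "high risk"},
               {"features": [0, 2, 2, 0], "outcome": "low risk"}]
      print(retrieve([3, 1, 1, 2], cases, threshold=5.0)[0][1]["outcome"])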

  1. Impact of the new nuclear decay data of ICRP publication 107 on inhalation dose coefficients for workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manabe, K.; Endo, Akira; Eckerman, Keith F

    2010-03-01

    The impact a revision of nuclear decay data had on dose coefficients was studied using data newly published in ICRP Publication 107 (ICRP 107) and existing data from ICRP Publication 38 (ICRP 38). Committed effective dose coefficients for occupational inhalation of radionuclides were calculated using two sets of decay data with the dose and risk calculation software DCAL for 90 elements, 774 nuclides and 1572 cases. The dose coefficients based on ICRP 107 increased by over 10% compared with those based on ICRP 38 in 98 cases, and decreased by over 10% in 54 cases. It was found that the differences in dose coefficients mainly originated from changes in the radiation energy emitted per nuclear transformation. In addition, revisions of the half-lives, radiation types and decay modes also resulted in changes in the dose coefficients.

  2. Research of Litchi Diseases Diagnosis Expertsystem Based on Rbr and Cbr

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Liu, Liqun

    To overcome the bottlenecks of traditional rule-based disease diagnosis systems, such as low reasoning efficiency and lack of flexibility, this work investigates the integration of case-based reasoning (CBR) and rule-based reasoning (RBR) and puts forward a litchi disease diagnosis expert system (LDDES) with an integrated reasoning method. The method uses data mining and knowledge acquisition techniques to establish the knowledge base and case library, adopts rules to guide retrieval and matching for CBR, and uses association rules and decision tree algorithms to calculate case similarity. Experiments show that the method increases the system's flexibility and reasoning ability and improves the accuracy of litchi disease diagnosis.

  3. Intelligent design of permanent magnet synchronous motor based on CBR

    NASA Astrophysics Data System (ADS)

    Li, Cong; Fan, Beibei

    2018-05-01

    The design process of the permanent magnet synchronous motor (PMSM) suffers from several problems, such as its complexity, over-reliance on designers' experience, and a lack of accumulation and inheritance of design knowledge. To solve these problems, a CBR-based design method for PMSMs is proposed. In this paper, a case-based reasoning (CBR) similarity calculation between cases is used to retrieve a suitable initial scheme, helping designers produce a conceptual PMSM solution quickly by referencing previous design cases. The case-retention process gives the system a self-enriching capability that improves its design ability with continued use.
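    A typical CBR retrieval of this kind scores a new design specification against stored cases with a weighted attribute similarity. A minimal sketch, with attribute names, weights, and normalization spans invented for illustration rather than taken from the cited system:

      def similarity(case_a, case_b, weights):
          # Weighted global similarity: numeric attributes use a normalized
          # distance, categorical attributes an exact match.
          score, total = 0.0, 0.0
          for attr, (kind, w, span) in weights.items():
              if kind == "num":
                  s = 1.0 - abs(case_a[attr] - case_b[attr]) / span
              else:
                  s = 1.0 if case_a[attr] == case_b[attr] else 0.0
              score += w * s
              total += w
          return score / total

      weights = {"rated_power_kW": ("num", 0.5, 200.0),
                 "rated_speed_rpm": ("num", 0.3, 6000.0),
                 "cooling": ("cat", 0.2, None)}
      new_spec = {"rated_power_kW": 30, "rated_speed_rpm": 3000, "cooling": "air"}
      old_case = {"rated_power_kW": 37, "rated_speed_rpm": 3000, "cooling": "air"}
      print(similarity(new_spec, old_case, weights))   # ~0.98

    The most similar stored case would then be adapted into the initial PMSM scheme, and the adapted result retained to enrich the case library.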

  4. Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios

    NASA Astrophysics Data System (ADS)

    Kluess, D.; Mittelmeier, W.; Bader, R.

    2009-12-01

    In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst-case scenarios including malposition, bone defects and stumbling. A finite element model was developed to calculate the implant performance in situ. The worst-case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stresses exceeded the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.

  6. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z

    2015-06-15

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC in IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. This is not optimal, because of unnecessary computations on spots that turn out to have very small weights after solving the optimization problem; GPU memory-writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, particles were sampled from all spots together with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigates the memory-writing conflict problem. Results: We validated the proposed MC-based optimization scheme on one prostate case. The total computation time of our method was ∼5-6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow was developed. The high efficiency makes it attractive for clinical use.
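    The key idea is that the particle budget tracks the evolving spot weights, so little time is spent on spots that will end up negligible. A toy sketch of the alternating loop under that assumption, with a random stand-in for the MC engine and a plain projected-gradient weight update (neither the Metropolis sampling nor the GPU engine of the abstract is reproduced):

      import numpy as np

      rng = np.random.default_rng(0)
      n_spots, n_voxels = 20, 50
      target = np.ones(n_voxels)
      influence = rng.random((n_spots, n_voxels)) * 0.1   # "true" dose per unit weight

      def mc_dose_per_spot(weights, total_particles=2e5):
          # Stand-in for the MC engine: the particle budget per spot is made
          # proportional to the current weights, so statistical noise shrinks
          # on the spots that matter and no effort is wasted on tiny ones.
          budget = np.maximum(weights / weights.sum() * total_particles, 1.0)
          noise = rng.normal(0.0, 1.0 / np.sqrt(budget))[:, None] * 0.01
          return influence + noise

      w = np.ones(n_spots)
      for _ in range(10):
          D = mc_dose_per_spot(w)               # re-sample doses at current weights
          grad = D @ (w @ D - target)           # gradient of 0.5*||D^T w - target||^2
          w = np.maximum(w - 0.5 * grad, 0.0)   # projected gradient step (w >= 0)
      print(np.linalg.norm(w @ D - target))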

  7. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms.

    PubMed

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-10-16

    The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when using advanced dose calculation algorithms that take into account electron transport (type B algorithms). Because type A algorithms do not account for secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment was performed comparing different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each case, 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm to evaluate the impact of secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV doses among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms when type B algorithms are used to calculate the dose. The combination of a type A algorithm to optimize a homogeneous dose in the PTV and a type B algorithm to prescribe the median GTV dose provides a very robust method for treating lung lesions.

  8. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Y; Kapp, D; Kim, Y

    2014-06-01

    Purpose: To report the first experience in the development of a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization on a cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with the 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and the computational efficiency. All plans generated from the cc-TPS were compared to those obtained with a PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors were improved up to 14.0-fold depending on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, averaged over the clinical cases, compared to those with the pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PR ≤ 10.6 for the head and neck case, 1.2 ≤ PR ≤ 13.3 for the lung case, and 1.0 ≤ PR ≤ 10.3 for the prostate cases) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning system has been set up, and our results demonstrate that the computational efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality as that obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1R01 CA133474) and by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP) (Grant No. 2009-00420).

  9. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details); commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
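    For readers unfamiliar with the underlying quantity, a single-locus paternity likelihood ratio in the simplest trio case (no mutations, no dropout) reduces to the alleged father's transmission probability divided by the population frequency of the child's paternal allele. A minimal sketch of that textbook case, far simpler than the pedigrees the test set covers:

      def paternity_lr_trio(paternal_allele_freq, father_copies):
          # LR = P(transmit paternal allele | alleged father) / P(random man transmits).
          # father_copies: copies of the paternal allele the alleged father carries.
          transmit = {0: 0.0, 1: 0.5, 2: 1.0}[father_copies]
          return transmit / paternal_allele_freq

      # Alleged father heterozygous for a paternal allele of frequency 0.1:
      print(paternity_lr_trio(0.1, 1))   # -> 5.0; independent loci multiply

    Validation suites such as the one described compare software output against exactly these kinds of hand-computable expressions before moving to inbred pedigrees, mutation models, and dropout.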

  10. Using Object Oriented Bayesian Networks to Model Linkage, Linkage Disequilibrium and Mutations between STR Markers

    PubMed Central

    Kling, Daniel; Egeland, Thore; Mostad, Petter

    2012-01-01

    In a number of applications there is a need to determine the most likely pedigree for a group of persons based on genetic markers, and adequate models are needed to reach this goal. The markers used to perform the statistical calculations can be linked, and there may also be linkage disequilibrium (LD) in the population. The purpose of this paper is to present a graphical Bayesian Network framework to deal with such data. Potential LD is normally ignored, and it is important to verify that the resulting calculations are not biased. Even if linkage does not influence results for regular paternity cases, it may have a substantial impact on likelihood ratios involving other, more extended pedigrees. Models for LD influence likelihoods for all pedigrees to some degree, and an initial estimate of the impact of ignoring LD and/or linkage is desirable, going beyond mere rules of thumb based on marker distance. Furthermore, we show how one can readily include a mutation model in the Bayesian Network; extending other programs or formulas to include such models may require considerable amounts of work and will in many cases not be practical. As an example, we consider the two STR markers vWa and D12S391. We estimate probabilities for population haplotypes to account for LD using a method based on data from trios, while an estimate for the degree of linkage is taken from the literature. The results show that accounting for haplotype frequencies is unnecessary in most cases for this specific pair of markers. When doing calculations on regular paternity cases, the markers can be considered statistically independent. In more complex cases of disputed relatedness, for instance cases involving siblings or so-called deficient cases, or when small differences in the LR matter, independence should not be assumed. (The networks are freely available at http://arken.umb.no/~dakl/BayesianNetworks.) PMID:22984448

  11. Control of Leakage Flow by Triple Squealer Configuration in Axial Flow Turbine

    NASA Astrophysics Data System (ADS)

    El-Ghandour, Mohamed; Ibrahim, Mohammed K.; Mori, Koichi; Nakamura, Yoshiaki

    A new turbine blade tip shape called the triple squealer is proposed. This shape is based on the conventional double squealer, with the cavity on the tip surface divided into two parts by a third squealer along the blade camber line. The effect of the ratio of groove depth to span (GDS ratio) was investigated, and the flat-tip case (baseline) and double-squealer case were calculated for comparison. An in-house, unstructured, 3D Navier-Stokes finite-volume multiblock code with detached eddy simulation (DES) as the turbulence model was used to calculate the flow field around the tip. The computational results show that the reduction in the mass flow rate of the leakage flow for the triple squealer is 15.69% compared to the flat-tip case.

  12. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines the advantages of model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam property accurately but lacks the capability of modeling dose deposition in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired, and a set of dose distributions was also calculated using the CCCS for the same setups. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to doses calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%, 3 mm for over 98% of the volume for the phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
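    A minimal numerical sketch of the three-step combination, using 1D arrays as stand-ins for dose distributions (values and names are illustrative; in the real method the correction is ray-traced through patient heterogeneity rather than added point-by-point):

      import numpy as np

      # Commissioning: tabulate measurement-minus-model differences in water.
      d_cccs_water = np.array([1.00, 0.95, 0.88, 0.80])   # CCCS, generic beam model (Gy)
      d_measured   = np.array([1.02, 0.96, 0.90, 0.79])   # water-phantom measurement (Gy)
      delta_table  = d_measured - d_cccs_water

      def hybrid_dose(d_model, delta_drt):
          # Step 3 of the abstract: D = D-model + D-deltaDRT.
          return d_model + delta_drt

      d_model_patient = np.array([0.98, 0.93, 0.85, 0.78])   # CCCS in the patient
      print(hybrid_dose(d_model_patient, delta_table))

    The model term carries the heterogeneity physics while the tabulated term carries the machine-specific beam characteristics, which is why re-commissioning after a hardware change only requires new water-phantom measurements.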

  13. Gonorrhoea and Syphilis Epidemiology in Flemish General Practice 2009–2013: Results from a Registry-based Retrospective Cohort Study Compared with Mandatory Notification

    PubMed Central

    Schweikardt, Christoph; Goderis, Geert; Elli, Steven; Coppieters, Yves

    2016-01-01

    Background: The number of newly diagnosed gonorrhoea and syphilis cases has increased in Flanders in recent years. Our aim was to investigate to what extent these diagnoses were registered by general practitioners (GPs), and to examine the opportunities and limits of the Intego database in this regard. Methods: Data from a retrospective cohort study based on the Flemish Intego general practice database were analyzed for the years 2009-2013. Case definitions were applied. Due to the small case numbers obtained, cases were pooled and averaged over the observation period. Frequencies were compared with those calculated from figures of mandatory notification. Results: A total of 91 gonorrhoea and 23 syphilis cases were registered. The average annual Intego frequency was 11.9 (95% Poisson confidence interval (CI) 9.6; 14.7) gonorrhoea cases per 100,000 population and 3.0 (CI 1.9; 4.5) syphilis cases, while mandatory notification was calculated at 14.0 (CI 13.6; 14.4) and 7.0 (CI 6.7; 7.3), respectively. Conclusion: In spite of limitations such as small numbers and different case definitions, comparison with mandatory notification suggests that the GP was involved in the large majority of gonorrhoea cases, while the majority of new syphilis cases did not come to the knowledge of the GP. PMID:29546196

  14. Assessment of Some Atomization Models Used in Spray Calculations

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Bulzin, Dan

    2011-01-01

    The paper presents the results from a validation study undertaken as a part of NASA's fundamental aeronautics initiative on high-altitude emissions, in order to assess the accuracy of several atomization models used in both non-superheat and superheat spray calculations. As a part of this investigation we have undertaken the validation based on four different cases to investigate the spray characteristics of (1) a flashing jet generated by the sudden release of pressurized R134A from a cylindrical nozzle, (2) a liquid jet atomizing in a subsonic cross flow, (3) a Parker-Hannifin pressure swirl atomizer, and (4) a single-element Lean Direct Injector (LDI) combustor experiment. These cases were chosen because of their importance in some aerospace applications. The validation is based on 3D and axisymmetric calculations involving both reacting and non-reacting sprays. In general, the predicted results provide reasonable agreement for both mean droplet sizes (D32) and average droplet velocities, but mostly underestimate the droplet sizes in the inner radial region of a cylindrical jet.

  15. First-principles calculation of defect free energies: General aspects illustrated in the case of bcc Fe

    NASA Astrophysics Data System (ADS)

    Murali, D.; Posselt, M.; Schiwarth, M.

    2015-08-01

    Modeling of nanostructure evolution in solids requires comprehensive data on the properties of defects such as the vacancy and foreign atoms. Since most processes occur at elevated temperatures, not only the energetics of defects in the ground state, but also their temperature-dependent free energies must be known. The first-principles calculation of contributions of phonon and electron excitations to free formation, binding, and migration energies of defects is illustrated in the case of bcc Fe. First of all, the ground-state properties of the vacancy, the foreign atoms Cu, Y, Ti, Cr, Mn, Ni, V, Mo, Si, Al, Co, O, and the O-vacancy pair are determined under constant volume (CV) as well as zero-pressure (ZP) conditions, and relations between the results of both kinds of calculations are discussed. Second, the phonon contribution to defect free energies is calculated within the harmonic approximation using the equilibrium atomic positions determined in the ground state under CV and ZP conditions. In most cases, the ZP-based free formation energy decreases monotonically with temperature, whereas for CV-based data both an increase and a decrease were found. The application of a quasiharmonic correction to the ZP-based data does not modify this picture significantly. However, the corrected data are valid under zero-pressure conditions at higher temperatures than in the framework of the purely harmonic approach. The difference between CV- and ZP-based data is mainly due to the volume change of the supercell since the relative arrangement of atoms in the environment of the defects is nearly identical in the two cases. A simple transformation similar to the quasiharmonic approach is found between the CV- and ZP-based frequencies. Therefore, it is not necessary to calculate these quantities and the corresponding defect free energies separately. In contrast to ground-state energetics, the CV- and ZP-based defect free energies do not become equal with increasing supercell size. Third, it was found that the contribution of electron excitations to the defect free energy can lead to an additional deviation of the total free energy from the ground-state value or can compensate the deviation caused by the phonon contribution. Finally, self-diffusion via the vacancy mechanism is investigated. The ratio of the respective CV- and ZP-based results for the vacancy diffusivity is nearly equal to the reciprocal of that for the equilibrium concentration. This behavior leads to almost identical CV- and ZP-based values for the self-diffusion coefficient. Obviously, this agreement is accidental. The consideration of the temperature dependence of the magnetization yields self-diffusion data in very good agreement with experiments.
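    For orientation, the phonon (harmonic) contribution referred to above is the standard oscillator sum. A minimal sketch of that formula, with made-up phonon energies (a real defect calculation would use the full supercell spectrum from first principles):

      import numpy as np

      KB = 8.617333e-5   # Boltzmann constant, eV/K

      def harmonic_free_energy(hbar_omega, temperature):
          # F_vib = sum_i [ hw_i/2 + kB*T*ln(1 - exp(-hw_i/(kB*T))) ]
          hw = np.asarray(hbar_omega, float)
          x = hw / (KB * temperature)
          return float(np.sum(hw / 2.0 + KB * temperature * np.log(1.0 - np.exp(-x))))

      modes_eV = [0.010, 0.025, 0.040]   # illustrative phonon energies
      print(harmonic_free_energy(modes_eV, 300.0))

    The free formation energy of a defect then follows from the difference between the defective and perfect supercell free energies (plus the ground-state formation energy), evaluated with either the CV- or ZP-based frequencies discussed in the abstract.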

  16. Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT

    NASA Astrophysics Data System (ADS)

    Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David

    2010-11-01

    Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study, including six head-and-neck (HN) cases and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry: in the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk, and the segment shapes were then exported to the Pinnacle3 TPS for segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module, which is an aperture-based optimization method. All VMAT plans were delivered on an Elekta Synergy linear accelerator, and the plans were compared in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity, such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing but less conformal dose distributions compared to the other two approaches. For the more complex HN cases, the anatomy-based approach was not able to provide clinically acceptable VMAT plans, while highly conformal dose distributions were obtained with both the aperture-based and fluence-based inverse planning techniques. The aperture-based approach provides better dose conformity than the fluence-based technique in complex cases.

  17. Clinical implementation of a GPU-based simplified Monte Carlo method for a treatment planning system of proton beam therapy.

    PubMed

    Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T

    2011-11-21

    We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by the traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC, and the computation time per beam arrangement for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.

  18. Validation of GPU based TomoTherapy dose calculation engine.

    PubMed

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphics processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architectural difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose, so the accuracy of the GPU dose engine had to be validated before its commercial release. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two, using gamma indices (Γ). The GPU dose was further verified against absolute point dose measurements with an ion chamber and against film measurements for the phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in a heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine: the majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1, the worst phantom case had 0.22% of voxels violating the criterion, and in the patient cases the worst percentage of violating voxels was 0.57%. For absolute point dose verification, all cases agreed with measurement to within ±3%, with average error magnitude within 1%. All cases passed the acceptance criterion that more than 95% of the pixels have Γ(3%, 3 mm) < 1 in the film measurements, with an average passing pixel percentage of 98.5%-99%. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster-based dose engine without degradation in dose accuracy.
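    Since both this record and several others in this collection quote gamma criteria such as Γ(3%, 3 mm), a minimal 1D global gamma index sketch may help fix the definition (the toy profiles are invented; clinical tools operate on 2D/3D grids with interpolation):

      import numpy as np

      def gamma_1d(x, dose_eval, dose_ref, dd=0.03, dta=3.0):
          # For each reference point, minimize over the evaluated distribution:
          # gamma = min sqrt((dr/dta)^2 + (dD/(dd*Dmax))^2)
          dmax = dose_ref.max()
          gam = np.empty_like(dose_ref)
          for i, (xi, di) in enumerate(zip(x, dose_ref)):
              dr = (x - xi) / dta
              dD = (dose_eval - di) / (dd * dmax)
              gam[i] = np.sqrt(dr ** 2 + dD ** 2).min()
          return gam

      x = np.arange(0.0, 50.0, 1.0)               # positions (mm)
      ref = np.exp(-((x - 25.0) / 10.0) ** 2)     # toy reference profile
      ev = np.exp(-((x - 25.5) / 10.0) ** 2)      # slightly shifted evaluation
      print((gamma_1d(x, ev, ref) < 1.0).mean())  # passing fraction at 3%/3 mm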

  19. The Confusion about CLV in Case-Based Teaching Materials

    ERIC Educational Resources Information Center

    Bendle, Neil T.; Bagga, Charan K.

    2017-01-01

    The authors review 33 cases and related materials to understand how customer lifetime value (CLV) is taught. The authors examine (a) whether CLV is calculated using something other than contribution (e.g., revenue), (b) whether discounting is used, and (c) whether acquisition costs are subtracted before reporting CLV. The authors show considerable…
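    The three choices the authors audit (contribution vs. revenue, discounting, acquisition cost) all appear in the standard infinite-horizon CLV formula. A worked sketch, with illustrative numbers:

      def clv(contribution_per_period, retention, discount_rate, acquisition_cost):
          # Closed form of sum_{t>=1} m * r^t / (1+d)^t, then subtract acquisition.
          lifetime = contribution_per_period * retention / (1.0 + discount_rate - retention)
          return lifetime - acquisition_cost

      # $100 annual contribution margin, 80% retention, 10% discount rate,
      # $50 acquisition cost:
      print(round(clv(100.0, 0.8, 0.1, 50.0), 2))   # -> 216.67

    Substituting revenue for contribution, dropping the discount factor, or skipping the acquisition-cost subtraction, the errors the cases are checked for, each inflates the reported CLV.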

  20. Can a semi-automated surface matching and principal axis-based algorithm accurately quantify femoral shaft fracture alignment in six degrees of freedom?

    PubMed

    Crookshank, Meghan C; Beek, Maarten; Singh, Devin; Schemitsch, Emil H; Whyne, Cari M

    2013-07-01

    Accurate alignment of femoral shaft fractures treated with intramedullary nailing remains a challenge for orthopaedic surgeons. The aim of this study is to develop and validate a cone-beam CT-based, semi-automated algorithm to quantify the malalignment in six degrees of freedom (6DOF) using a surface matching and principal axes-based approach. Complex comminuted diaphyseal fractures were created in nine cadaveric femora and cone-beam CT images were acquired (27 cases total). Scans were cropped and segmented using intensity-based thresholding, producing superior, inferior and comminution volumes. Cylinders were fit to estimate the long axes of the superior and inferior fragments. The angle and distance between the two cylindrical axes were calculated to determine flexion/extension and varus/valgus angulation and medial/lateral and anterior/posterior translations, respectively. Both surfaces were unwrapped about the cylindrical axes. Three methods of matching the unwrapped surface for determination of periaxial rotation were compared based on minimizing the distance between features. The calculated corrections were compared to the input malalignment conditions. All 6DOF were calculated to within current clinical tolerances for all but two cases. This algorithm yielded accurate quantification of malalignment of femoral shaft fractures for fracture gaps up to 60 mm, based on a single CBCT image of the fractured limb. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  1. Symmetry analysis of trimers rovibrational spectra: the case of Ne3★

    NASA Astrophysics Data System (ADS)

    Márquez-Mijares, Maykel; Roncero, Octavio; Villarreal, Pablo; González-Lezana, Tomás

    2018-05-01

    An approximate method to assign symmetry labels to the rovibrational spectrum of homonuclear trimers, based on solving the rotational Hamiltonian with a purely vibrational basis combined with standard rotational functions, is applied to Ne3. The neon trimer constitutes an ideal test case between heavier systems such as Ar3, for which the method proves to be an extremely useful technique, and previously investigated cases such as H3+, where some limitations were observed. The calculated rovibrational energy levels are compared with results from different calculations reported in the literature.

  2. Burst strength of tubing and casing based on twin shear unified strength theory.

    PubMed

    Lin, Yuanhua; Deng, Kuanhai; Sun, Yongxing; Zeng, Dezhi; Liu, Wanying; Kong, Xiangwei; Singh, Ambrish

    2014-01-01

    The internal pressure strength of tubing and casing often cannot satisfy the design requirements in high-pressure, high-temperature and high-H2S gas wells, and the practical safety coefficient of some wells is lower than the design standard according to the current API 5C3 standard, which complicates design. ISO 10400:2007 provides a model that calculates the burst strength of tubing and casing better than the API 5C3 standard, but its accuracy is still not satisfactory, because about 50 percent of the predicted values are markedly higher than the real burst values. Therefore, to improve the strength design of tubing and casing, this paper derives the plastic limit pressure of tubing and casing under internal pressure by applying the twin shear unified strength theory. Based on a study of how the yield-to-tensile strength ratio and mechanical properties influence the burst strength of tubing and casing, a more precise calculation model of tubing and casing burst strength is established that accounts for material hardening and the intermediate principal stress. Numerical and experimental comparisons show that the new burst strength model is much closer to the real burst values than other models. The research results provide an important reference for optimizing the tubing and casing design of deep and ultra-deep wells.
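    For context, two classical reference points bracket such models: the thin-wall Barlow formula used in API ratings and the thick-wall plastic limit pressure for a von Mises material. A sketch of both (illustrative numbers; the paper's twin shear unified strength model additionally interpolates the yield criterion and includes hardening, which is not reproduced here):

      import math

      def barlow_burst(sigma_y, wall, od):
          # API-style thin-wall estimate.
          return 2.0 * sigma_y * wall / od

      def von_mises_limit(sigma_y, od, id_):
          # Plastic limit pressure of a thick-walled cylinder, von Mises yield.
          return (2.0 / math.sqrt(3.0)) * sigma_y * math.log(od / id_)

      od, wall, sigma_y = 177.8e-3, 10.36e-3, 758e6   # 7-in casing-like numbers
      print(barlow_burst(sigma_y, wall, od) / 1e6)                 # ~88 MPa
      print(von_mises_limit(sigma_y, od, od - 2.0 * wall) / 1e6)   # ~108 MPa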

  4. Time Safety Margin: Theory and Practice

    DTIC Science & Technology

    2016-09-01

    Basic dive recovery terminology. The simplest definition of TSM: Time Safety Margin is the time to directly travel from the worst-case vector to an... Safety Margin (TSM). TSM is defined as the time in seconds to directly travel from the worst-case vector (i.e., the worst-case combination of parameters)... invoked by this AFI, base recovery planning and risk management upon the calculated TSM. TSM is the time in seconds to directly travel from the worst case

  5. 77 FR 21529 - Freshwater Crawfish Tail Meat From the People's Republic of China: Final Results of Antidumping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... question, including when that rate is zero or de minimis.\5\ In this case, there is only one non-selected... calculations for one company. Therefore, the final results differ from the preliminary results. The final... not to calculate an all-others rate using any zero or de minimis margins or any margins based entirely...

  6. The calculation of molecular Eigen-frequencies

    NASA Technical Reports Server (NTRS)

    Lindemann, F. A.

    1984-01-01

    A method of determining molecular eigen-frequencies based on Einstein's function expressing the variation of the atomic heat of various elements is proposed. It is shown that the same equation can be utilized to calculate both the atomic heat and the optically identifiable eigen-frequencies - at least to an order of magnitude - suggesting that in both cases the same oscillating structure is responsible.

  7. Direct Quantum Dynamics Using Grid-Based Wave Function Propagation and Machine-Learned Potential Energy Surfaces.

    PubMed

    Richings, Gareth W; Habershon, Scott

    2017-09-12

    We describe a method for performing nuclear quantum dynamics calculations using standard, grid-based algorithms, including the multiconfiguration time-dependent Hartree (MCTDH) method, where the potential energy surface (PES) is calculated "on-the-fly". The method of Gaussian process regression (GPR) is used to construct a global representation of the PES using values of the energy at points distributed in molecular configuration space during the course of the wavepacket propagation. We demonstrate this direct dynamics approach both for an analytical PES function describing 3-dimensional proton transfer dynamics in malonaldehyde and for 2- and 6-dimensional quantum dynamics simulations of proton transfer in salicylaldimine. In the case of salicylaldimine we also perform calculations in which the PES is constructed using Hartree-Fock calculations through an interface to an ab initio electronic structure code. In all cases, the results of the quantum dynamics simulations are in excellent agreement with previous simulations of both systems, yet do not require prior fitting of a PES at any stage. Our approach (implemented in a development version of the Quantics package) opens a route to performing accurate quantum dynamics simulations, via wave function propagation, of many-dimensional molecular systems in a direct and efficient manner.
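    At its core, the on-the-fly surface is a GPR mean prediction through the energies sampled so far. A self-contained toy sketch of that regression step (2D configuration space, synthetic energies, fixed kernel length; the paper's actual kernels and the MCTDH coupling are not reproduced):

      import numpy as np

      def rbf_kernel(A, B, length=0.3):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-0.5 * d2 / length ** 2)

      rng = np.random.default_rng(0)
      X = rng.random((40, 2))                          # sampled configurations
      y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])    # stand-in for ab initio energies

      K = rbf_kernel(X, X) + 1e-8 * np.eye(len(X))     # jitter for numerical stability
      alpha = np.linalg.solve(K, y)

      def pes(q):
          # GPR mean: the smooth global surface the propagator evaluates.
          return (rbf_kernel(np.atleast_2d(np.asarray(q, float)), X) @ alpha).item()

      print(pes([0.5, 0.5]), np.sin(1.5) * np.cos(1.5))

    New energy points gathered during propagation simply extend X and y and the fit is refreshed, which is what removes the need for a pre-fitted PES.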

  8. TrackEtching - A Java based code for etched track profile calculations in SSNTDs

    NASA Astrophysics Data System (ADS)

    Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.

    2017-09-01

    A Java code incorporating a user-friendly GUI has been developed to calculate the parameters of chemically etched track profiles in ion-irradiated solid state nuclear track detectors. Huygens' construction of wavefronts based on secondary wavelets is used to numerically calculate the etched track profile as a function of the etching time. Provision for normal and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of the track etch rate as a function of the ion energy have been utilized; the best set of parameter values in these expressions can be obtained by comparison with available experimental data. The critical angle for track development can also be calculated using the present code.
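    A small piece of the underlying geometry is easy to state: for constant bulk and track etch rates, the same ratio fixes both the etch-cone half-angle and the critical dip angle below which no track develops. A sketch with illustrative rates:

      import math

      def cone_and_critical_angle(v_bulk, v_track):
          # sin(theta) = v_B / v_T for constant etch rates.
          return math.degrees(math.asin(v_bulk / v_track))

      # Illustrative rates for a CR-39-like detector (um/h):
      print(cone_and_critical_angle(1.2, 6.0))   # ~11.5 degrees

    The code described in the record goes beyond this constant-rate limit by letting the track etch rate vary with ion energy and building the full profile from a Huygens construction.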

  9. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

    The dose distributions from proton pencil beam scanning were calculated by FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability in proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100 and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first but with proton energies in a Gaussian distribution. Comparison to measurement indicates that the inter-code differences might be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time was also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study demonstrates that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.

  10. SU-E-T-159: Evaluation of a Patient Specific QA Tool Based On TG119

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashmeg, S; Zhang, Y; O'Daniel, J

    2014-06-01

    Purpose: To evaluate the accuracy of a 3D patient-specific QA tool by analyzing the results produced by its associated software in a homogeneous phantom and heterogeneous patient CTs. Methods: IMRT and VMAT plans for the five test suites introduced by TG119 were created in ECLIPSE on a solid water phantom. The ten plans, of increasing complexity, were delivered to the Delta4 to give a 3D measurement. The Delta4 "Anatomy" software uses the measured dose to back-calculate the energy fluence of the delivered beams, which is used for dose calculation in a patient CT using a pencil-beam algorithm. The effect of the complexity of the modulated beams on the accuracy of the "Anatomy" calculation was evaluated. Both measured and Anatomy doses were compared to the ECLIPSE calculation using 3%/3 mm gamma criteria. We also tested the effect of heterogeneity by analyzing the results of the "Anatomy" calculation on a brain VMAT case and a 3D conformal lung case. Results: In the homogeneous phantom, the gamma passing rates were as low as 74.75% for a complex plan with high modulation. The mean passing rates were 91.47% ± 6.35% for the "Anatomy" calculation and 99.46% ± 0.62% for the Delta4 measurements. For the heterogeneous cases, the rates were 96.54% ± 3.67% and 83.87% ± 9.42% for the brain VMAT and 3D lung cases, respectively. The increased error in the lung case could be due to the use of the pencil-beam algorithm as opposed to the AAA used by ECLIPSE. Gamma analysis also showed high discrepancy along the beam edge in the "Anatomy" results, suggesting poor beam modeling in the penumbra region. Conclusion: The results show various sources of error in "Anatomy" calculations: beam modeling in the penumbra region, the complexity of a modulated beam (shown in the homogeneous phantom and brain cases) and the dose calculation algorithm (3D conformal lung case).

  11. Development of the Workplace Health Savings Calculator: a practical tool to measure economic impact from reduced absenteeism and staff turnover in workplace health promotion.

    PubMed

    Baxter, Siyan; Campbell, Sharon; Sanderson, Kristy; Cazaly, Carl; Venn, Alison; Owen, Carole; Palmer, Andrew J

    2015-09-18

    Workplace health promotion is focussed on improving the health and wellbeing of workers. Although quantifiable effectiveness and economic evidence is variable, workplace health promotion is recognised by both government and business stakeholders as potentially beneficial for worker health and economic advantage. Despite the current debate on whether conclusive positive outcomes exist, governments are investing, and business engagement is necessary for value to be realised. Practical tools are needed to assist decision makers in developing the business case for workplace health promotion programs. Our primary objective was to develop an evidence-based, simple and easy-to-use resource (calculator) for Australian employers interested in workplace health investment figures. Three phases were undertaken to develop the calculator. First, a literature review located appropriate effectiveness measures. Second, a review of employer-facilitated programs aimed at improving the health and wellbeing of employees was used to identify change estimates surrounding these measures. Third, currently available online evaluation tools and models were investigated. We present a simple web-based calculator for employers who wish to estimate the potential annual savings associated with implementing a successful workplace health promotion program. The calculator uses effectiveness measures (absenteeism and staff turnover rates) and change estimates sourced from 55 case studies to generate the annual savings an employer may potentially gain; Australian wage statistics were used to calculate replacement costs due to staff turnover. The calculator was named the Workplace Health Savings Calculator and was adapted and reproduced on the Healthy Workers web portal by the Australian Commonwealth Government Department of Health and Ageing. The Workplace Health Savings Calculator is a simple online business tool that aims to engage employers and to assist the participation, development and implementation of workplace health promotion programs.
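    The savings logic such a calculator embodies can be summarized in a few lines: avoided absenteeism is valued at the daily wage and avoided turnover at a replacement-cost fraction of salary. A sketch with invented inputs (the calculator's own effectiveness measures and change estimates come from its 55 case studies, not from these numbers):

      def annual_savings(n_staff, avg_salary, absent_days_saved_per_worker,
                         turnover_reduction, replacement_cost_fraction=0.75,
                         work_days=230):
          # Savings = avoided absence days * daily wage
          #         + avoided departures * replacement cost per departure.
          daily_wage = avg_salary / work_days
          absenteeism = n_staff * absent_days_saved_per_worker * daily_wage
          turnover = n_staff * turnover_reduction * replacement_cost_fraction * avg_salary
          return absenteeism + turnover

      # 100 staff, $60k average salary, 1.5 fewer sick days each, 2% lower turnover:
      print(round(annual_savings(100, 60000.0, 1.5, 0.02)))   # ~129130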

  12. Case-based fracture image retrieval.

    PubMed

    Zhou, Xin; Stern, Richard; Müller, Henning

    2012-05-01

    Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered as relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated. One used a dense 40 × 40 pixel grid sampling, and the second one used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered as important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors on the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between a SIFT-based system and GIFT is performed. A first comparison on the selection of sampling strategies using SIFT features shows that dense sampling using a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, three unsupervised feature selection strategies were evaluated. A grid parameter search is applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) obtains the best performance for all three strategies. Increasing the number of clusters in clustering can also improve the retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
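    The second (per-case variance) selection strategy is straightforward to sketch: score each grid block by the variance of its descriptors and keep the most variable half, on the premise that visually flat regions carry little fracture information. A toy version (random stand-ins for SIFT descriptors; block counts follow the 40 x 40 grid described above):

      import numpy as np

      rng = np.random.default_rng(1)
      # descriptors[b]: stack of 128-dim SIFT-like descriptors from grid block b
      descriptors = [rng.random((20, 128)) * (1.0 if b % 2 else 0.1)
                     for b in range(1600)]              # 40 x 40 = 1600 blocks

      saliency = np.array([d.var() for d in descriptors])
      keep = np.argsort(saliency)[-800:]                # retain the top half
      print(len(keep))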

  13. Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.

    PubMed

    Gapsys, Vytautas; de Groot, Bert L

    2017-12-12

    Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .

  14. Molecular simulation of caloric properties of fluids modelled by force fields with intramolecular contributions: Application to heat capacities

    NASA Astrophysics Data System (ADS)

    Smith, William R.; Jirsák, Jan; Nezbeda, Ivo; Qi, Weikai

    2017-07-01

    The calculation of caloric properties such as heat capacity, Joule-Thomson coefficients, and the speed of sound by classical force-field-based molecular simulation methodology has received scant attention in the literature, particularly for systems composed of complex molecules whose force fields (FFs) are characterized by a combination of intramolecular and intermolecular terms. The calculation of a thermodynamic property for a system whose molecules are described by such a FF involves the calculation of the residual property prior to its addition to the corresponding ideal-gas property, the latter of which is separately calculated, either using thermochemical compilations or nowadays accurate quantum mechanical calculations. Although the simulation of a volumetric residual property proceeds by simply replacing the intermolecular FF in the rigid molecule case by the total (intramolecular plus intermolecular) FF, this is not the case for a caloric property. We describe the correct methodology required to perform such calculations and illustrate it in this paper for the case of the internal energy and the enthalpy and their corresponding molar heat capacities. We provide numerical results for cP, one of the most important caloric properties. We also consider approximations to the correct calculation procedure previously used in the literature and illustrate their consequences for the examples of the relatively simple molecule 2-propanol, CH3CH(OH)CH3, and for the more complex molecule monoethanolamine, HO(CH2)2NH2, an important fluid used in carbon capture.

  15. Facilitating the selection and creation of accurate interatomic potentials with robust tools and characterization

    NASA Astrophysics Data System (ADS)

    Trautt, Zachary T.; Tavazza, Francesca; Becker, Chandler A.

    2015-10-01

    The Materials Genome Initiative seeks to significantly decrease the cost and time of development and integration of new materials. Within the domain of atomistic simulations, several roadblocks stand in the way of reaching this goal. While the NIST Interatomic Potentials Repository hosts numerous interatomic potentials (force fields), researchers cannot immediately determine the best choice(s) for their use case. Researchers developing new potentials, specifically those in restricted environments, lack a comprehensive portfolio of efficient tools capable of calculating and archiving the properties of their potentials. This paper elucidates one solution to these problems, which uses Python-based scripts that are suitable for rapid property evaluation and human knowledge transfer. Calculation results are visible on the repository website, which reduces the time required to select an interatomic potential for a specific use case. Furthermore, property evaluation scripts are being integrated with modern platforms to improve discoverability and access of materials property data. To demonstrate these scripts and features, we will discuss the automation of stacking fault energy calculations and their application to additional elements. While the calculation methodology was developed previously, we are using it here as a case study in simulation automation and property calculations. We demonstrate how the use of Python scripts allows for rapid calculation in a more easily managed way where the calculations can be modified, and the results presented in user-friendly and concise ways. Additionally, the methods can be incorporated into other efforts, such as openKIM.

  16. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore the software micro-optimization methods. Methods: The patient-specific source of Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA’s database, partial geometry information of the jaw and MLC as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, and was focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  17. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
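
    A minimal sketch of the core mapping step, assuming a simple linear fit of paired MRI intensity and HU values from an artifact-free slice (the study performs a more comprehensive analysis); all arrays below are hypothetical.

```python
import numpy as np

# Hypothetical paired data from a coregistered, artifact-free slice:
# MRI intensity vs. CT Hounsfield units at the same anatomical locations.
mri_clean = np.array([120.0, 180.0, 240.0, 300.0, 360.0, 420.0])
hu_clean = np.array([-80.0, -20.0, 40.0, 100.0, 160.0, 220.0])

# Fit an intensity -> HU mapping (linear here, for illustration only).
slope, intercept = np.polyfit(mri_clean, hu_clean, 1)

# Replace corrupted HU values in the artifact slice using its MRI intensities.
mri_artifact_slice = np.array([150.0, 275.0, 390.0])
hu_predicted = slope * mri_artifact_slice + intercept
print(hu_predicted)
```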

  18. Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method

    NASA Astrophysics Data System (ADS)

    Yuanyue, Yang; Huimin, Li

    2018-02-01

    Large investment, long routes, and frequent change orders are among the main causes of cost overrun in long-distance water diversion projects. Building on existing research, this paper constructs a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of each risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method evaluates risks accurately and with high reliability. Case calculation and verification show that it can provide valid cost overrun decision-making information to construction companies.

  19. Do prices reflect the costs of cardiac surgery in the elderly?

    PubMed

    Coelho, Pedro; Rodrigues, Vanessa; Miranda, Luís; Fragata, José; Pita Barros, Pedro

    2017-01-01

    Payment for cardiac surgery in Portugal is based on a contract agreement between hospitals and the health ministry. Our aim was to compare the prices paid according to this contract agreement with calculated costs in a population of patients aged ≥65 years undergoing cardiac surgery in one hospital department. Data on 250 patients operated between September 2011 and September 2012 were prospectively collected. The procedures studied were coronary artery bypass graft surgery (CABG) (n=67), valve surgery (n=156) and combined CABG and valve surgery (n=27). Costs were calculated by two methods: micro-costing when feasible and mean length of stay otherwise. Price information was provided by the hospital administration and calculated using the hospital's mean case-mix. Thirty-day mortality was 3.2%. Mean EuroSCORE I was 5.97 (standard deviation [SD] 4.5%), significantly lower for CABG (p<0.01). Mean intensive care unit stay was 3.27 days (SD 4.7) and mean hospital stay was 9.92 days (SD 6.30), both significantly shorter for CABG. Calculated costs for CABG were €6539.17 (SD 3990.26), for valve surgery €8289.72 (SD 3319.93) and for combined CABG and valve surgery €11 498.24 (SD 10 470.57). The payment for each patient was €4732.38 in 2011 and €4678.66 in 2012 based on the case-mix index of the hospital group, which was 2.06 in 2011 and 2.21 in 2012; however, the case-mix in our sample was 6.48 in 2011 and 6.26 in 2012. The price paid for each patient was lower than the calculated costs. Prices would be higher than costs if the case-mix of the sample had been used. Costs were significantly lower for CABG. Copyright © 2016 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
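
    The pricing argument can be reproduced with simple arithmetic if one assumes, as the abstract implies, that the payment per patient scales linearly with the case-mix index; the figures are taken from the abstract, the linearity is an assumption.

```python
# Back-of-envelope reconstruction of the pricing argument (2011 figures).
payment_2011 = 4732.38      # EUR per patient at the hospital group's case-mix
hospital_cmi_2011 = 2.06    # hospital group's case-mix index
sample_cmi_2011 = 6.48      # case-mix index of the studied surgical sample

base_rate = payment_2011 / hospital_cmi_2011
price_with_sample_cmi = base_rate * sample_cmi_2011
print(f"~EUR {price_with_sample_cmi:.0f} per patient")  # well above the
# calculated costs of EUR 6539-11498, consistent with the abstract's claim.
```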

  20. Reported Neuroinvasive Cases of West Nile Virus by State, 2002-2014

    EPA Pesticide Factsheets

    This map shows the average annual incidence of neuroinvasive West Nile virus disease in each state, which is calculated as the average number of new cases per 100,000 people per year from 2002 to 2014. The map is based on cases that local and state health departments report to CDC's national disease tracking system. Neuroinvasive cases are those that affect the brain or cause neurologic dysfunction. For more information: www.epa.gov/climatechange/science/indicators

  1. Gradient-based multiconfiguration Shepard interpolation for generating potential energy surfaces for polyatomic reactions.

    PubMed

    Tishchenko, Oksana; Truhlar, Donald G

    2010-02-28

    This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 x 2 electronically diabatic Hamiltonian matrix with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at the first order, and, therefore, no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH + H2 --> H2O + H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
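
    The core construction is the lower eigenvalue of the 2 x 2 diabatic Hamiltonian; a minimal sketch with hypothetical energies (kcal/mol) is:

```python
import numpy as np

def adiabatic_ground(v11, v22, v12):
    """Lower eigenvalue of the 2x2 diabatic Hamiltonian
    [[V11, V12], [V12, V22]]: the reactive ground-state PES in MCSI."""
    mean = 0.5 * (v11 + v22)
    gap = 0.5 * (v11 - v22)
    return mean - np.sqrt(gap**2 + v12**2)

# Hypothetical values at one nuclear configuration: V11, V22 from
# reactant/product molecular mechanics; V12 from Shepard interpolation
# of (here first-order) Taylor expansions.
print(adiabatic_ground(v11=12.0, v22=15.0, v12=4.0))
```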

  2. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
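
    For reference, the γ-index evaluation used to validate the engine can be sketched in one dimension as follows; the criteria handling, toy profiles, and global normalization are illustrative choices, not the study's implementation.

```python
import numpy as np

def gamma_index_1d(x_mm, dose_eval, dose_ref, dta_mm=3.0, dd_frac=0.03):
    """Simplified global 1D gamma index of an evaluated dose profile against
    a reference (e.g. a Monte Carlo gold standard). The dose difference
    criterion is taken relative to the reference maximum."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref, dtype=float)
    for i in range(len(x_mm)):
        dist_term = (x_mm - x_mm[i]) / dta_mm
        dose_term = (dose_eval - dose_ref[i]) / (dd_frac * d_max)
        gamma[i] = np.sqrt(dist_term**2 + dose_term**2).min()
    return gamma

x = np.linspace(0.0, 100.0, 201)               # positions in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)        # toy reference profile
ev = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)  # toy evaluated profile
g = gamma_index_1d(x, ev, ref)
print(f"3%/3 mm passing rate: {100.0 * (g <= 1.0).mean():.1f}%")
```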

  3. Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6 [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, C.E.

    Comparisons between RELAP4/MOD6, Update 4 code-calculated and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.

  4. 45 CFR 261.43 - What is the definition of a “case receiving assistance” in calculating the caseload reduction...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... credit is based on decreases in caseloads receiving TANF- or SSP-MOE-funded assistance (other than those... TANF and SSP-MOE assistance expenditures (both Federal and State) divided by the average monthly sum of TANF and SSP-MOE caseloads for the fiscal year. (iii) If the excess MOE calculation is for a separate...

  5. Distorted-wave born approximation calculations for turbulence scattering in an upward-refracting atmosphere

    NASA Technical Reports Server (NTRS)

    Gilbert, Kenneth E.; Di, Xiao; Wang, Lintao

    1990-01-01

    Weiner and Keast observed that in an upward-refracting atmosphere, the relative sound pressure level versus range follows a characteristic 'step' function. The observed step function has recently been predicted qualitatively and quantitatively by including the effects of small-scale turbulence in a parabolic equation (PE) calculation (Gilbert et al., J. Acoust. Soc. Am. 87, 2428-2437 (1990)). Here the PE results are compared to single-scattering calculations based on the distorted-wave Born approximation (DWBA). The purpose is to obtain a better understanding of the physical mechanisms that produce the step function. The PE calculations and DWBA calculations are compared to each other and to the data of Weiner and Keast for upwind propagation (strong upward refraction) and crosswind propagation (weak upward refraction) at frequencies of 424 Hz and 848 Hz. The DWBA calculations, which include only single scattering from turbulence, agree with the PE calculations and with the data in all cases except for upwind propagation at 848 Hz. Consequently, it appears that in all cases except one, the observed step function can be understood in terms of single scattering from an upward-refracted 'skywave' into the refractive shadow zone. For upwind propagation at 848 Hz, the DWBA calculation gives levels in the shadow zone that are much below both the PE results and the data.

  6. Population-based Incidence of Pulmonary Nontuberculous Mycobacterial Disease in Oregon 2007 to 2012.

    PubMed

    Henkle, Emily; Hedberg, Katrina; Schafer, Sean; Novosad, Shannon; Winthrop, Kevin L

    2015-05-01

    Pulmonary nontuberculous mycobacteria (NTM) disease is a chronic, nonreportable illness, making it difficult to monitor. Although recent studies suggest an increasing prevalence of NTM disease in the United States, the incidence and temporal trends are unknown. To describe incident cases and calculate the incidence and temporal trends of pulmonary NTM disease in Oregon. We contacted all laboratories performing mycobacterial cultures on Oregon residents and collected demographic and specimen information for patients with NTM isolated during 2007 to 2012. We defined a case of pulmonary NTM disease using the 2007 American Thoracic Society/Infectious Disease Society of America microbiologic criteria. We used similar state-wide mycobacterial laboratory data from 2005 to 2006 to exclude prevalent cases from our calculations. We calculated annual pulmonary NTM disease incidence within Oregon during 2007 to 2012, described cases demographically and microbiologically, and evaluated incidence trends over time using a Poisson model. We identified 1,146 incident pulmonary NTM cases in Oregon residents from 2007 to 2012. The median age was 69 years (range, 0.9-97 yr). Cases were more likely female (56%), but among patients less than 60 years old, disease was more common in male subjects (54%). Most (86%) were Mycobacterium avium/intracellulare cases; 68 (6%) were Mycobacterium abscessus/chelonae cases. Although not statistically significant, incidence increased from 4.8/100,000 in 2007 to 5.6/100,000 in 2012 (P for trend, 0.21). Incidence increased with age, to more than 25/100,000 in patients 80 years of age or older. This is the first population-based estimate of pulmonary NTM disease incidence in a region within the United States. In Oregon, disease incidence rose slightly during 2007 to 2012, and although more common in female individuals overall, disease was more common among male individuals less than 60 years of age.
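
    The Poisson trend test reported above can be sketched as follows. The annual counts are hypothetical (chosen only to sum to the study's 1,146 cases) and the population is a rough constant, so the output does not reproduce the published P value.

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(2007, 2013)
cases = np.array([180, 185, 190, 192, 196, 203])  # hypothetical, sums to 1146
pop = np.full(6, 3.8e6)                           # hypothetical Oregon pop.

# Poisson regression of counts on calendar year with a log-population offset;
# the slope is the log incidence-rate trend per year.
X = sm.add_constant(years - years[0])
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(pop)).fit()
print(fit.params[1], fit.pvalues[1])
```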

  7. Population-based Incidence of Pulmonary Nontuberculous Mycobacterial Disease in Oregon 2007 to 2012

    PubMed Central

    Hedberg, Katrina; Schafer, Sean; Novosad, Shannon; Winthrop, Kevin L.

    2015-01-01

    Rationale: Pulmonary nontuberculous mycobacteria (NTM) disease is a chronic, nonreportable illness, making it difficult to monitor. Although recent studies suggest an increasing prevalence of NTM disease in the United States, the incidence and temporal trends are unknown. Objectives: To describe incident cases and calculate the incidence and temporal trends of pulmonary NTM disease in Oregon. Methods: We contacted all laboratories performing mycobacterial cultures on Oregon residents and collected demographic and specimen information for patients with NTM isolated during 2007 to 2012. We defined a case of pulmonary NTM disease using the 2007 American Thoracic Society/Infectious Disease Society of America microbiologic criteria. We used similar state-wide mycobacterial laboratory data from 2005 to 2006 to exclude prevalent cases from our calculations. We calculated annual pulmonary NTM disease incidence within Oregon during 2007 to 2012, described cases demographically and microbiologically, and evaluated incidence trends over time using a Poisson model. Measurements and Main Results: We identified 1,146 incident pulmonary NTM cases in Oregon residents from 2007 to 2012. The median age was 69 years (range, 0.9–97 yr). Cases were more likely female (56%), but among patients less than 60 years old, disease was more common in male subjects (54%). Most (86%) were Mycobacterium avium/intracellulare cases; 68 (6%) were Mycobacterium abscessus/chelonae cases. Although not statistically significant, incidence increased from 4.8/100,000 in 2007 to 5.6/100,000 in 2012 (P for trend, 0.21). Incidence increased with age, to more than 25/100,000 in patients 80 years of age or older. Conclusions: This is the first population-based estimate of pulmonary NTM disease incidence in a region within the United States. In Oregon, disease incidence rose slightly during 2007 to 2012, and although more common in female individuals overall, disease was more common among male individuals less than 60 years of age. PMID:25692495

  8. A SQL-Database Based Meta-CASE System and its Query Subsystem

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki; Sgirka, Rünno

    Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database management system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate different parts of the system and simplifies the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system using the PostgreSQL™ ORDBMS and the PHP scripting language.

  9. A quantum mechanical strategy to investigate the structure of liquids: the cases of acetonitrile, formamide, and their mixture.

    PubMed

    Mennucci, Benedetta; da Silva, Clarissa O

    2008-06-05

    A computational strategy based on quantum mechanical (QM) calculations and continuum solvation models is used to investigate the structure of liquids (either neat liquids or mixtures). The strategy is based on the comparison of calculated and experimental spectroscopic properties (IR-Raman vibrational frequencies and Raman intensities). In particular, neat formamide, neat acetonitrile, and their equimolar mixture are studied by comparing isolated and solvated clusters of different nature and size. In all cases, the study seems to indicate that liquids, even when strongly associated, can be effectively modeled in terms of a shell-like system in which clusters of strongly interacting molecules (the microenvironments) are solvated by a polarizable macroenvironment represented by the rest of the molecules. Only by taking proper account of both these effects can a correct picture of the liquid structure be achieved.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S; Guerrero, M; Zhang, B

    Purpose: To implement a comprehensive non-measurement-based verification program for patient-specific IMRT QA. Methods: Based on published guidelines, a robust IMRT QA program should assess the following components: 1) accuracy of dose calculation, 2) accuracy of data transfer from the treatment planning system (TPS) to the record-and-verify (RV) system, 3) treatment plan deliverability, and 4) accuracy of plan delivery. Results: We have implemented an IMRT QA program that consists of four components: 1) an independent recalculation of the dose distribution in the patient anatomy with a commercial secondary dose calculation program, Mobius3D (Mobius Medical Systems, Houston, TX), with dose accuracy evaluated using gamma analysis, PTV mean dose, PTV coverage to 95%, and organ-at-risk mean dose; 2) an automated in-house-developed plan comparison system that compares all relevant plan parameters, such as MU, MLC position, beam isocenter position, collimator, gantry, couch, field size settings, and bolus placement, between the plan and the RV system; 3) use of the RV system to check plan deliverability, with further confirmation using the “mode-up” function on the treatment console for plans receiving a warning; and 4) a comprehensive weekly MLC QA, in addition to routine accelerator monthly and daily QA. Among 1200 verifications, there were 9 cases of suspicious calculations, 5 cases of delivery failure, no data transfer errors, and no failures of the weekly MLC QA. These 9 suspicious cases were due to the PTV extending to the skin or to heterogeneity correction effects, which would not have been caught using phantom measurement-based QA. The delivery failures were due to rounding variation of MLC positions between the planning system and the RV system. Conclusion: A very efficient, yet comprehensive, non-measurement-based patient-specific QA program has been implemented and used clinically for about 18 months with excellent results.

  11. Two brothers' alleged paternity for a child: who is the father?

    PubMed

    Dogan, Muhammed; Kara, Umut; Emre, Ramazan; Fung, Wing Kam; Canturk, Kemal Murat

    2015-06-01

    In paternity cases where the alleged fathers are close relatives, it may be necessary to include the mother's DNA profile (trio test) and to increase the number of polymorphic STR loci analyzed. In our case, two alleged fathers who are brothers and the child (duo case) were analyzed at 20 STR loci; however, no exclusions could be achieved. A trio test (with the mother) was then performed using the Identifiler Plus kit (Applied Biosystems), and again no exclusions could be achieved. In the analysis performed with the ESS Plex Plus kit (Qiagen), the paternity of one of the two alleged fathers was excluded at only 2 STR loci. We calculated power of exclusion values to interpret our results more rigorously. The probability of exclusion (PE) is calculated as 0.9776546 for the 15 loci of the Identifiler Plus kit without the mother. The PE is calculated as 0.9942803 if 5 additional loci from the ESS Plex Plus kit are typed. The PE becomes 0.9961048 for the Identifiler Plus kit in trio analysis. If both the Identifiler Plus and ESS Plex Plus kits are used for testing, the PE is calculated as 0.999431654, which indicates that the combined kits are highly discriminating.
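
    Power-of-exclusion values from independent loci combine multiplicatively; a minimal sketch is shown below, with hypothetical per-locus PEs, since the kit-level figures above share loci and cannot simply be multiplied together.

```python
# Combined probability of exclusion across independent loci:
# PE_combined = 1 - prod(1 - PE_i).
def combined_pe(pe_values):
    q = 1.0
    for pe in pe_values:
        q *= (1.0 - pe)
    return 1.0 - q

per_locus_pe = [0.62, 0.55, 0.48, 0.70, 0.51]  # hypothetical per-locus PEs
print(f"combined PE = {combined_pe(per_locus_pe):.6f}")
```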

  12. Calculation of two dimensional vortex/surface interference using panel methods

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1980-01-01

    The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.

  13. Risk factors for cardiomyopathy syndrome (CMS) in Norwegian salmon farming.

    PubMed

    Bang Jensen, Britt; Brun, Edgar; Fineid, Birgitte; Larssen, Rolf Bjerke; Kristoffersen, Anja B

    2013-12-12

    Cardiomyopathy syndrome (CMS) has been an economically important disease in Norwegian aquaculture since the 1990s. In this study, data on monthly production characteristics and case registrations were combined in a cohort study and supplemented with a questionnaire-based case-control survey on management factors in order to identify risk factors for CMS. The cohort study included cases and controls from 2005 to 2012. Differences between all cases and controls in this dataset were analyzed by mixed-effects multivariate logistic regression. We found that the probability of CMS increased with increasing time in the sea, infection pressure, and cohort size, and that cohorts which had previously been diagnosed with heart and skeletal muscle inflammation, or which were in farms with a history of CMS in previous cohorts, had double the odds of developing CMS. The model was then used to calculate the predicted value for each cohort for which additional data were obtained via the questionnaire-based survey, and this prediction was used as an offset for calculating the probability of CMS in a semi-univariate analysis of additional risk factors. Finally, the model was used to calculate the probability of developing CMS in 100 different scenarios in which the cohorts were subject to increasingly worse conditions with regard to the risk factors from the dataset. We believe that this exercise is a good way of communicating the findings to farmers, so they can make informed decisions when trying to avoid CMS in their fish cohorts.
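
    A scenario probability from a fitted logistic model is computed as sketched below; the coefficients and covariates are hypothetical placeholders, not the study's fitted values.

```python
import math

def cms_probability(intercept, coefs, covariates):
    """Predicted probability from a logistic regression:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    eta = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical scenario: months at sea, infection pressure, cohort size
# (millions of fish), prior-HSMI indicator, prior-CMS-at-farm indicator.
betas = [0.15, 0.8, 0.3, 0.7, 0.7]
print(cms_probability(-6.0, betas, [14, 1.2, 1.1, 1, 1]))
```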

  14. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.

  15. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    PubMed

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.

  16. Cost of unreliability method to estimate loss of revenue based on unreliability data: Case study of Printing Company

    NASA Astrophysics Data System (ADS)

    Alhilman, Judi

    2017-12-01

    In the production line of a printing office, the reliability of the printing machine plays a very important role: if the machine fails, it can disrupt the production target, causing the company to suffer a large financial loss. One method to calculate the financial loss caused by machine failure is the Cost of Unreliability (COUR) method, which works from machine downtime data and the costs associated with unreliability. Based on the COUR calculation, the total cost due to unreliability of the printing machine during active repair time and downtime is 1,003,747.00.
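
    A minimal cost-of-unreliability tally consistent with this description is sketched below; all durations and rates are hypothetical, whereas the study derives them from the printing machine's failure records.

```python
# Cost of unreliability: downtime-driven production losses plus repair costs.
downtime_hours = 120.0      # total downtime in the analysis period
active_repair_hours = 45.0  # portion spent actively repairing
lost_revenue_rate = 5_000.0 # currency units per hour of lost production
labour_rate = 300.0         # currency units per active repair hour
spares_cost = 25_000.0      # parts consumed by repairs

cour = (downtime_hours * lost_revenue_rate
        + active_repair_hours * labour_rate
        + spares_cost)
print(f"cost of unreliability = {cour:,.2f}")
```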

  17. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that, when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball formation. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final state probabilities. Additionally, we examined the formation of strange and charmed mesons as well, using existing data to fit the relevant model parameters.

  18. M-Health for Improving Screening Accuracy of Acute Malnutrition in a Community-Based Management of Acute Malnutrition Program in Mumbai Informal Settlements.

    PubMed

    Chanani, Sheila; Wacksman, Jeremy; Deshmukh, Devika; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja

    2016-12-01

    Acute malnutrition is linked to child mortality and morbidity. Community-Based Management of Acute Malnutrition (CMAM) programs can be instrumental in large-scale detection and treatment of undernutrition. The World Health Organization (WHO) 2006 weight-for-height/length tables are diagnostic tools available to screen for acute malnutrition. Frontline workers (FWs) in a CMAM program in Dharavi, Mumbai, were using CommCare, a mobile application, for monitoring and case management of children in combination with the paper-based WHO simplified tables. A strategy was undertaken to digitize the WHO tables into the CommCare application. To measure differences in diagnostic accuracy in community-based screening for acute malnutrition, by FWs, using a mobile-based solution. Twenty-seven FWs initially used the paper-based tables and then switched to an updated mobile application that included a nutritional grade calculator. Human error rates specifically associated with grade classification were calculated by comparison of the grade assigned by the FW to the grade each child should have received based on the same WHO tables. Cohen kappa coefficient, sensitivity and specificity rates were also calculated and compared for paper-based grade assignments and calculator grade assignments. Comparing FWs (N = 14) who completed at least 40 screenings without and 40 with the calculator, the error rates were 5.5% and 0.7%, respectively (p < .0001). Interrater reliability (κ) increased to an almost perfect level (>.90), from .79 to .97, after switching to the mobile calculator. Sensitivity and specificity also improved significantly. The mobile calculator significantly reduces an important component of human error in using the WHO tables to assess acute malnutrition at the community level. © The Author(s) 2016.
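
    The digitized calculator essentially replaces manual table lookup with code. A toy version, using a hypothetical excerpt of weight-for-height cutoffs rather than the actual WHO 2006 simplified tables, might look like this:

```python
# Hypothetical excerpt: height (cm) -> (severe cutoff kg, moderate cutoff kg).
# The real application embeds the full WHO 2006 weight-for-height tables.
CUTOFFS = {
    65.0: (5.9, 6.3),
    70.0: (6.9, 7.4),
    75.0: (7.9, 8.5),
}

def grade(height_cm, weight_kg):
    """Classify a child's nutritional grade by nearest table row."""
    key = min(CUTOFFS, key=lambda h: abs(h - height_cm))
    severe, moderate = CUTOFFS[key]
    if weight_kg < severe:
        return "SAM"      # severe acute malnutrition
    if weight_kg < moderate:
        return "MAM"      # moderate acute malnutrition
    return "normal"

print(grade(70.4, 7.1))   # -> MAM under these hypothetical cutoffs
```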

  19. Invited commentary: Evaluating epidemiologic research methods--the importance of response rate calculation.

    PubMed

    Harris, M Anne

    2010-09-15

    Epidemiologic research that uses administrative records (rather than registries or clinical surveys) to identify cases for study has been increasingly restricted because of concerns about privacy, making unbiased population-based research less practicable. In their article, Nattinger et al. (Am J Epidemiol. 2010;172(6):637-644) present a method for using administrative data to contact participants that has been well received. However, the methods employed for calculating and reporting response rates require further consideration, particularly the classification of untraceable cases as ineligible. Depending on whether response rates are used to evaluate the potential for bias to influence study results or to evaluate the acceptability of the method of contact, different fractions may be considered. To improve the future study of epidemiologic research methods, a consensus on the calculation and reporting of study response rates should be sought.
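
    The distinction drawn here can be made concrete: classifying untraceable cases as ineligible removes them from the denominator, which inflates the response rate. A small worked sketch with hypothetical counts:

```python
# Hypothetical counts for a study contacting cases via administrative records.
responded = 620
refused = 180
untraceable = 200

# Untraceable cases counted as eligible non-responders (bias-oriented view):
rate_eligible = responded / (responded + refused + untraceable)

# Untraceable cases classified as ineligible (contact-method-oriented view):
rate_ineligible = responded / (responded + refused)

print(f"{rate_eligible:.1%} vs {rate_ineligible:.1%}")  # 62.0% vs 77.5%
```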

  20. Risk calculation variability over time in ocular hypertensive subjects.

    PubMed

    Song, Christian; De Moraes, Carlos Gustavo; Forchheimer, Ilana; Prata, Tiago S; Ritch, Robert; Liebmann, Jeffrey M

    2014-01-01

    To investigate the longitudinal variability of glaucoma risk calculation in ocular hypertensive (OHT) subjects. We reviewed the charts of untreated OHT patients followed in a glaucoma referral practice for a minimum of 60 months. Clinical variables collected at baseline and during follow-up included age, central corneal thickness (CCT), intraocular pressure (IOP), vertical cup-to-disc ratio (VCDR), and visual field pattern standard deviation (VFPSD). These were used to calculate the 5-year risk of conversion to primary open-angle glaucoma (POAG) at each follow-up visit using the Ocular Hypertension Treatment Study and European Glaucoma Prevention Study calculator (http://ohts.wustl.edu/risk/calculator.html). We also calculated the risk of POAG conversion based on the fluctuation of measured variables over time assuming the worst case scenarios (final age, highest PSD, lowest CCT, highest IOP, and highest VCDR) and best case scenarios (baseline age, lowest PSD, highest CCT, lowest IOP, and lowest VCDR) for each patient. Risk probabilities (%) were plotted against follow-up time to generate slopes of risk change over time. We included 27 untreated OHT patients (54 eyes) followed for a mean of 98.3±18.5 months. Seven individuals (25.9%) converted to POAG during follow-up. The mean 5-year risk of conversion for all patients in the study group ranged from 2.9% to 52.3% during follow-up. The mean slope of risk change over time was 0.37±0.81% increase/y. The mean slope for patients who reached a POAG endpoint was significantly greater than for those who did not (1.3±0.78 vs. 0.042±0.52%/y, P<0.01). In each patient, the mean risk of POAG conversion increased almost 10-fold when comparing the best case scenario with the worst case scenario (5.0% vs. 45.7%, P<0.01). The estimated 5-year risk of conversion to POAG among untreated OHT patients varies significantly during follow-up, with a trend toward increasing over time. Within the same individual, the estimated risk can vary almost 10-fold based on the variability of IOP, CCT, VCDR, and VFPSD. Therefore, a single risk calculation measurement may not be sufficient for accurate risk assessment, informed decision-making by patients, and physician treatment recommendations.
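
    The slope of risk change over time reported above is a simple linear fit of calculated risk against follow-up time; a sketch with hypothetical per-visit risk values:

```python
import numpy as np

# Hypothetical 5-year risk estimates (%) recalculated at successive visits.
visit_years = np.array([0.0, 1.0, 2.0, 3.5, 5.0, 6.5, 8.0])
risk_pct = np.array([8.0, 9.1, 9.8, 11.5, 12.0, 13.2, 14.1])

slope, intercept = np.polyfit(visit_years, risk_pct, 1)
print(f"risk change over time: {slope:.2f} %/y")
```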

  1. Task 7: Endwall treatment inlet flow distortion analysis

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Heidegger, N. J.; McNulty, G. S.; Weber, K. F.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields, and to perform a series of detailed numerical predictions to assess the effectiveness of various endwall treatments for enhancing the efficiency and stall margin of modern high speed fan rotors. Particular attention was given to examining the effectiveness of endwall treatments to counter the undesirable effects of inflow distortion. Calculations were performed using three different gridding techniques based on the type of casing treatment being tested and the level of complexity desired in the analysis. In each case, the casing treatment itself is modeled as a discrete object in the overall analysis, and the flow through the casing treatment is determined as part of the solution. A series of calculations were performed for both treated and untreated modern fan rotors both with and without inflow distortion. The effectiveness of the various treatments were quantified, and several physical mechanisms by which the effectiveness of endwall treatments is achieved are discussed.

  2. Understanding the photoluminescence characteristics of Eu3+-doped double-perovskite by electronic structure calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Binita; Halder, Saswata; Sinha, T. P.

    2016-05-23

    Europium-doped luminescent barium samarium tantalum oxide Ba2SmTaO6 (BST) has been investigated by first-principles calculation, and the crystal structure, electronic structure, and optical properties of pure BST and Eu-doped BST have been examined and compared. Based on the calculated results, the luminescence properties and mechanism of Eu-doped BST have been discussed. In the case of Eu-doped BST, there is an impurity energy band at the Fermi level, formed by seven spin-up energy levels of Eu, which acts as the luminescent centre, as is evident from the band structure calculations.

  3. Consistent criticality and radiation studies of Swiss spent nuclear fuel: The CS2M approach.

    PubMed

    Rochman, D; Vasiliev, A; Ferroukhi, H; Pecchia, M

    2018-06-15

    In this paper, a new method is proposed to systematically calculate canister loading curves and radiation sources at the same time, based on inventory information from an in-core fuel management system. As a demonstration, the isotopic contents of the assemblies come from a Swiss PWR, considering more than 6000 cases from 34 reactor cycles. The CS2M approach consists of combining four codes: CASMO and SIMULATE to extract the assembly characteristics (based on validated models), the SNF code for source emission, and MCNP for criticality calculations for specific canister loadings. The considered cases cover enrichments from 1.9 to 5.0% for the UO2 assemblies and 4.8% for the MOX, with assembly burnup values from 7 to 74 MWd/kgU. Because such a study is based on the individual fuel assembly history, it opens the possibility of optimizing canister loadings from the point of view of criticality, decay heat, and emission sources. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift

    NASA Astrophysics Data System (ADS)

    Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.

    2012-01-01

    This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
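
    A minimal sketch of the lateral ZMP test for wheel lift, using the generic simplified-kinematics formula rather than the paper's exact derivation; all vehicle parameters are hypothetical.

```python
def lateral_zmp(y_cm, ydd_cm, z_cm, zdd_cm=0.0, g=9.81):
    """Lateral zero-moment-point location for a simplified rigid vehicle:
    y_zmp = y_cm - z_cm * ydd_cm / (g + zdd_cm). Untripped wheel lift is
    flagged when the ZMP reaches the edge of the support polygon, i.e.
    half the track width."""
    return y_cm - z_cm * ydd_cm / (g + zdd_cm)

track_width = 1.6   # m, hypothetical
z_cg = 0.65         # m, hypothetical centre-of-mass height
lat_accel = 13.0    # m/s^2, hypothetical peak lateral acceleration

y_zmp = lateral_zmp(0.0, lat_accel, z_cg)
print(abs(y_zmp) > track_width / 2)  # True -> wheel lift predicted
```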

  5. Radial secondary electron dose profiles and biological effects in light-ion beams based on analytical and Monte Carlo calculations using distorted wave cross sections.

    PubMed

    Wiklund, Kristin; Olivera, Gustavo H; Brahme, Anders; Lind, Bengt K

    2008-07-01

    To speed up dose calculation, an analytical pencil-beam method has been developed to calculate the mean radial dose distributions due to secondary electrons that are set in motion by light ions in water. For comparison, radial dose profiles calculated using a Monte Carlo technique have also been determined. An accurate comparison of the resulting radial dose profiles of the Bragg peak for 1H+, 4He2+ and 6Li3+ ions has been performed. The double differential cross sections for secondary electron production were calculated using the continuous distorted wave-eikonal initial state method (CDW-EIS). For the secondary electrons that are generated, the radial dose distribution in the analytical case is based on the generalized Gaussian pencil-beam method and the central axis depth-dose distributions are calculated using the Monte Carlo code PENELOPE. In the Monte Carlo case, the PENELOPE code was used to calculate the whole radial dose profile based on CDW data. The present pencil-beam and Monte Carlo calculations agree well at all radii. A radial dose profile that is shallower at small radii and steeper at large radii than the conventional 1/r^2 is clearly seen with both the Monte Carlo and pencil-beam methods. As expected, since the projectile velocities are the same, the dose profiles of Bragg-peak ions of 0.5 MeV 1H+, 2 MeV 4He2+ and 3 MeV 6Li3+ are almost the same, with about 30% more delta electrons in the sub-keV range from 4He2+ and 6Li3+ compared to 1H+. A similar behavior is also seen for 1 MeV 1H+, 4 MeV 4He2+ and 6 MeV 6Li3+, all classically expected to have the same secondary electron cross sections. The results are promising and indicate a fast and accurate way of calculating the mean radial dose profile.

  6. Investigation of the Fe3+ centers in perovskite KMgF3 through a combination of ab initio (density functional theory) and semi-empirical (superposition model) calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emül, Y.; Erbahar, D.

    2015-08-14

    Analyses of the local crystal and electronic structure in the vicinity of Fe3+ centers in perovskite KMgF3 crystal have been carried out in a comprehensive manner. A combination of density functional theory (DFT) and a semi-empirical superposition model (SPM) is used for a complete analysis of all Fe3+ centers in this study for the first time. Quantitative information has been derived from the DFT calculations on both the electronic structure and the local geometry around the Fe3+ centers. All of the trigonal centers (the K-vacancy case, the K-Li substitution case, and the normal trigonal Fe3+ center case), the FeF5O cluster, and the tetragonal centers (the Mg-vacancy and Mg-Li substitution cases) have been taken into account, based on previously suggested experimental and theoretical inferences. The combination of the experimental data with the results of both the DFT and SPM calculations enables us to identify the most probable structural model for the Fe3+ centers in KMgF3.

  7. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS (Varian Eclipse V10), by using Monte Carlo (MC) simulations in which the entire accelerator head, including the multi-leaf collimators, is modeled. The MC-calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as doses lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and MC depends on depth and is generally less than 1% for in-water-phantom comparisons and for CT-based patient dose calculations with static fields and IMRT. In the case of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact of the error on the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to the very low out-of-field doses relative to the target dose.

  8. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    PubMed

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed with respect to occlusion of moving targets; that is, they are unable to calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors in total detect a signal, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning when only a few sensors detect the signal.
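
    One generic way to realize "pose from any three or more sensors, pooled across base stations" is a least-squares rigid transform between the sensors' known body-frame layout and their world-frame position estimates. The Kabsch-based sketch below is a stand-in under that assumption, not the paper's algorithm.

```python
import numpy as np

def estimate_pose(body_pts, world_pts):
    """Least-squares rigid transform (Kabsch algorithm) mapping >= 3
    non-collinear sensor positions in the object's body frame onto their
    world-frame estimates, which may be pooled across base stations."""
    bc, wc = body_pts.mean(axis=0), world_pts.mean(axis=0)
    h = (body_pts - bc).T @ (world_pts - wc)   # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = wc - rot @ bc
    return rot, trans

# Three sensors detected in total, in any combination of base stations.
body = np.array([[0.00, 0.00, 0.0], [0.10, 0.00, 0.0], [0.00, 0.08, 0.0]])
c, s = np.cos(0.5), np.sin(0.5)
true_rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
world = body @ true_rot.T + np.array([1.0, 2.0, 0.5])  # synthetic test pose
rot, trans = estimate_pose(body, world)
print(np.round(rot, 3), np.round(trans, 3))
```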

  9. An Improved Method of Pose Estimation for Lighthouse Base Station Extension

    PubMed Central

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-01-01

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed with respect to occlusion of moving targets; that is, they are unable to calculate poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors in total detect a signal, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning when only a few sensors detect the signal. PMID:29065509

  10. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy

    NASA Astrophysics Data System (ADS)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-01

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.

  11. Single-Case Experimental Designs to Evaluate Novel Technology-Based Health Interventions

    PubMed Central

    Cassidy, Rachel N; Raiff, Bethany R

    2013-01-01

    Technology-based interventions to promote health are expanding rapidly. Assessing the preliminary efficacy of these interventions can be achieved by employing single-case experiments (sometimes referred to as n-of-1 studies). Although single-case experiments are often misunderstood, they offer excellent solutions to address the challenges associated with testing new technology-based interventions. This paper provides an introduction to single-case techniques and highlights advances in developing and evaluating single-case experiments, which help ensure that treatment outcomes are reliable, replicable, and generalizable. These advances include quality control standards, heuristics to guide visual analysis of time-series data, effect size calculations, and statistical analyses. They also include experimental designs to isolate the active elements in a treatment package and to assess the mechanisms of behavior change. The paper concludes with a discussion of issues related to the generality of findings derived from single-case research and how generality can be established through replication and through analysis of behavioral mechanisms. PMID:23399668

  12. Achieving Accreditation Council for Graduate Medical Education duty hours compliance within advanced surgical training: a simulation-based feasibility assessment.

    PubMed

    Obi, Andrea; Chung, Jennifer; Chen, Ryan; Lin, Wandi; Sun, Siyuan; Pozehl, William; Cohn, Amy M; Daskin, Mark S; Seagull, F Jacob; Reddy, Rishindra M

    2015-11-01

    Certain operative cases occur unpredictably and/or have long operative times, creating a conflict between Accreditation Council for Graduate Medical Education (ACGME) rules and an adequate training experience. A ProModel-based simulation was developed using historical data. Probabilistic distributions of operative time were calculated and combined with an ACGME-compliant call schedule. For the advanced surgical cases modeled (cardiothoracic transplants), the 80-hour limit was violated 6.07% of the time and the minimum number of days off was violated 22.50% of the time. There was a 36% chance of failure to fulfill a minimum case requirement (for either heart or lung) despite adequate volume. The variable nature of emergency cases inevitably leads to work hour violations under ACGME regulations. Unpredictable cases mandate a higher operative volume to ensure the achievement of adequate caseloads. Publicly available simulation technology provides a valuable avenue for assessing the adequacy of case volumes for trainees in both the elective and emergency settings. Copyright © 2015 Elsevier Inc. All rights reserved.
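
    The abstract does not give the fitted operative-time distributions, so the following toy Monte Carlo only illustrates the approach: Poisson emergency case counts and lognormal case durations (all parameter values are assumptions, not the study's inputs) on top of a fixed weekly base load, with the 80-hour violation rate estimated by simulation:

      import numpy as np

      rng = np.random.default_rng(0)

      def weekly_violation_rate(n_weeks=20_000, base_hours=60.0,
                                mean_cases=1.5, mu=2.0, sigma=0.5):
          """Fraction of simulated weeks exceeding the ACGME 80-hour cap.

          Emergency case counts are Poisson(mean_cases); each case's
          duration (hours) is lognormal(mu, sigma). All parameter values
          are illustrative assumptions.
          """
          n_cases = rng.poisson(mean_cases, n_weeks)
          totals = np.full(n_weeks, base_hours)
          for w in range(n_weeks):
              totals[w] += rng.lognormal(mu, sigma, n_cases[w]).sum()
          return float(np.mean(totals > 80.0))

      print(weekly_violation_rate())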

  13. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation, realized in the Monte Carlo code MCS, is described. This method was applied to the calculational analysis of the well-known light water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement between Monte Carlo results obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.

  14. A Biomechanical Model for Lung Fibrosis in Proton Beam Therapy

    NASA Astrophysics Data System (ADS)

    King, David J. S.

    The physics of protons makes them well suited to conformal radiotherapy due to the well-known Bragg peak effect. Because of uncertainties in a proton's stopping power, a small amount of dose can spill over to an organ at risk (OAR). Previous models for calculating normal tissue complication probabilities (NTCPs) relied on the equivalent uniform dose (EUD) model, in which the organ was split into 1/3, 2/3, or whole-organ irradiation. However, the problem of dealing with volumes <1/3 of the total volume renders this EUD-based approach inapplicable. In this work, the case for an experimental data-based replacement at low volumes is investigated. Lung fibrosis is investigated as an NTCP effect typically arising from dose spilling over from tumour irradiation at the spinal base. Considering a 3D geometrical model of the lungs, irradiations are modelled with variable parameters of dose overflow. To calculate NTCPs without the EUD model, experimental data from the quantitative analysis of normal tissue effects in the clinic (QUANTEC) are used. Additional side projects are introduced and explained at various points. A typical radiotherapy course of 30 fractions of 2 Gy is simulated, and a range of target volume geometries and irradiation types is investigated. Investigations with X-rays found the majority of the data-point ratios (ratios of EUD values found from calculation-based and data-based methods) within 20% of unity, showing relatively close agreement. The ratios did not systematically prefer one particular predictive method, and no Vx metric was found to consistently outperform another. Agreement is good in certain cases and not in others, as predicted in the literature. The overall results lead to the conclusion that there is no reason to discount the use of the data-based predictive method, particularly as a low-volume replacement predictive method.
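
    For context, the generalized EUD commonly used in NTCP modelling reduces a dose-volume histogram to a single dose, EUD = (sum_i v_i D_i^a)^(1/a). A sketch of this standard formula follows (not necessarily the exact variant used in the thesis):

      import numpy as np

      def gEUD(doses, volumes, a):
          """Generalized equivalent uniform dose.

          doses: dose levels of the DVH bins (Gy); volumes: fractional
          volumes of those bins; a: tissue-specific parameter
          (a = 1 gives the mean dose; large a emphasises hot spots).
          """
          v = np.asarray(volumes, dtype=float)
          v = v / v.sum()                              # normalise to unit volume
          return float(np.sum(v * np.asarray(doses) ** a) ** (1.0 / a))

      # Example: two-bin DVH, half the organ at 10 Gy and half at 50 Gy
      print(gEUD([10.0, 50.0], [0.5, 0.5], a=1.0))     # 30.0, the mean dose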

  15. Estimating Leptospirosis Incidence Using Hospital-Based Surveillance and a Population-Based Health Care Utilization Survey in Tanzania

    PubMed Central

    Biggs, Holly M.; Hertz, Julian T.; Munishi, O. Michael; Galloway, Renee L.; Marks, Florian; Saganda, Wilbrod; Maro, Venance P.; Crump, John A.

    2013-01-01

    Background The incidence of leptospirosis, a neglected zoonotic disease, is uncertain in Tanzania and much of sub-Saharan Africa, resulting in scarce data on which to prioritize resources for public health interventions and disease control. In this study, we estimate the incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania. Methodology/Principal Findings We conducted a population-based household health care utilization survey in two districts in the Kilimanjaro Region of Tanzania and identified leptospirosis cases at two hospital-based fever sentinel surveillance sites in the Kilimanjaro Region. We used multipliers derived from the health care utilization survey and case numbers from hospital-based surveillance to calculate the incidence of leptospirosis. A total of 810 households were enrolled in the health care utilization survey, and multipliers were derived based on responses to questions about health care seeking in the event of febrile illness. Of patients enrolled in fever surveillance over a 1 year period and residing in the 2 districts, 42 (7.14%) of 588 met the case definition for confirmed or probable leptospirosis. After applying multipliers to account for hospital selection, test sensitivity, and study enrollment, we estimated that the overall incidence of leptospirosis ranges from 75 to 102 cases per 100,000 persons annually. Conclusions/Significance We calculated a high incidence of leptospirosis in two districts in the Kilimanjaro Region of Tanzania, where leptospirosis incidence was previously unknown. Multiplier methods, such as the one used in this study, may be a feasible way of improving the availability of incidence estimates for neglected diseases, such as leptospirosis, in resource-constrained settings. PMID:24340122
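
    A sketch of the multiplier arithmetic, with illustrative values rather than the study's actual multipliers: each step's multiplier is the reciprocal of the fraction of true cases captured at that step (care seeking, hospital selection, test sensitivity, enrollment):

      def multiplier_incidence(cases, population, multipliers):
          """Annual incidence per 100,000 using the multiplier method."""
          adjusted = float(cases)
          for m in multipliers:
              adjusted *= m                            # undo each loss step
          return adjusted / population * 100_000

      # Illustrative only: 42 surveillance cases, catchment of 500,000 people
      print(multiplier_incidence(42, 500_000, [2.0, 1.5, 1.2, 1.3]))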

  16. Plane-Wave Implementation and Performance of à-la-Carte Coulomb-Attenuated Exchange-Correlation Functionals for Predicting Optical Excitation Energies in Some Notorious Cases.

    PubMed

    Bircher, Martin P; Rothlisberger, Ursula

    2018-06-12

    Linear-response time-dependent density functional theory (LR-TD-DFT) has become a valuable tool in the calculation of excited states of molecules of various sizes. However, standard generalized-gradient approximation and hybrid exchange-correlation (xc) functionals often fail to correctly predict charge-transfer (CT) excitations with low orbital overlap, thus limiting the scope of the method. The Coulomb-attenuation method (CAM) in the form of the CAM-B3LYP functional has been shown to reliably remedy this problem in many CT systems, making accurate predictions possible. However, in spite of a rather consistent performance across different orbital overlap regimes, some pitfalls remain. Here, we present a fully flexible and adaptable implementation of the CAM for Γ-point calculations within the plane-wave pseudopotential molecular dynamics package CPMD and explore how customized xc functionals can improve the optical spectra of some notorious cases. We find that results obtained using plane waves agree well with those from all-electron calculations employing atom-centered bases, and that it is possible to construct a new Coulomb-attenuated xc functional based on simple considerations. We show that such a functional is able to outperform CAM-B3LYP in some cases, while retaining similar accuracy in systems where CAM-B3LYP performs well.
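
    For reference, the Coulomb attenuation underlying CAM-B3LYP partitions the two-electron operator with the error function; the first term is treated with exact (Hartree-Fock) exchange and the second with DFT exchange. The parameter values quoted below are the published CAM-B3LYP ones; the "à-la-carte" functionals of the paper vary them:

      \frac{1}{r_{12}}
        = \frac{\alpha + \beta\,\operatorname{erf}(\mu r_{12})}{r_{12}}
        + \frac{1 - \alpha - \beta\,\operatorname{erf}(\mu r_{12})}{r_{12}},
      \qquad \alpha = 0.19,\ \beta = 0.46,\ \mu = 0.33\ \text{a.u.}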

  17. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Jung, Yeon Sang; Liu, Yuxuan

    A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases, based on practical temperature profiles according to the thermal power levels. Although a new subgroup capability for nonuniform temperature distributions was implemented in MPACT, no validation calculation had been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to those of the SNU nTRACER code, which uses a similar cross section library and subgroup method to obtain self-shielded cross sections.

  18. Studies on time of death estimation in the early post mortem period -- application of a method based on eyeball temperature measurement to human bodies.

    PubMed

    Kaliszan, Michał

    2013-09-01

    This paper presents a verification of a thermodynamic model allowing estimation of the time of death (TOD) by calculating the post mortem interval (PMI) based on a single eyeball temperature measurement at the death scene. The study was performed on 30 cases with known PMI, ranging from 1 h 35 min to 5 h 15 min, using pin probes connected to a high-precision electronic thermometer (Dostmann-electronic). The measured eye temperatures ranged from 20.2 to 33.1°C. Rectal temperature was measured at the same time and ranged from 32.8 to 37.4°C. Ambient temperatures, which ranged from -1 to 24°C, environmental conditions (still air to light wind) and the amount of hair on the head were also recorded in every case. PMI was calculated using a formula based on Newton's law of cooling, previously derived and successfully tested in comprehensive studies on pigs and a few human cases. Thanks to the significantly faster post mortem decrease of eye temperature, a residual or nonexistent plateau effect in the eye, and practically no influence of body mass, TOD in the human death cases could be estimated with good accuracy. The highest TOD estimation errors for post mortem intervals up to around 5 h were 1 h 16 min, 1 h 14 min and 1 h 03 min in three of the 30 cases, while for the remaining 27 cases the error was not more than 47 min. The mean error for all 30 cases was ±31 min. This indicates that the proposed method offers quite good precision in the early post mortem period, with an accuracy of ±1 h at a 95% confidence interval. On the basis of the presented method, TOD can also be calculated at the death scene with the use of a proposed portable electronic device (TOD-meter). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
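
    The abstract gives neither the fitted constants nor the exact published formula, so the following is only a minimal Newton-cooling inversion, assuming a constant ambient temperature, an initial eye temperature of 37°C, and an illustrative cooling constant k. From T(t) = Ta + (T0 - Ta)e^(-kt), the interval is t = (1/k) ln[(T0 - Ta)/(T - Ta)]:

      import math

      def pmi_hours(T_eye, T_ambient, k_per_hour, T0=37.0):
          """Post mortem interval from a single eye temperature reading.

          Assumes Newton cooling with constant ambient temperature and
          initial eye temperature T0 (deg C); k_per_hour is an empirical
          cooling constant (illustrative, not the paper's fitted value).
          """
          return math.log((T0 - T_ambient) / (T_eye - T_ambient)) / k_per_hour

      print(round(pmi_hours(T_eye=28.0, T_ambient=15.0, k_per_hour=0.35), 2))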

  19. Clinician time used for decision making: a best case workflow study using cardiovascular risk assessments and Ask Mayo Expert algorithmic care process models.

    PubMed

    North, Frederick; Fox, Samuel; Chaudhry, Rajeev

    2016-07-20

    Risk calculation is increasingly used in lipid management, congestive heart failure, and atrial fibrillation. The risk scores are then used for decisions about statin use, anticoagulation, and implantable defibrillator use. Calculating risks for patients and making decisions based on these risks is often done at the point of care and is an additional time burden for clinicians that can be decreased by automating the tasks and using clinical decision-making support. Using Morae Recorder software, we timed 30 healthcare providers tasked with calculating the overall risk of cardiovascular events, sudden death in heart failure, and thrombotic event risk in atrial fibrillation. Risk calculators used were the American College of Cardiology Atherosclerotic Cardiovascular Disease risk calculator (AHA-ASCVD risk), Seattle Heart Failure Model (SHFM risk), and CHA2DS2VASc. We also timed the 30 providers using Ask Mayo Expert care process models for lipid management, heart failure management, and atrial fibrillation management based on the calculated risk scores. We used the Mayo Clinic primary care panel to estimate time for calculating an entire panel risk. Mean provider times to complete the CHA2DS2VASc, AHA-ASCVD risk, and SHFM were 36, 45, and 171 s respectively. For decision making about atrial fibrillation, lipids, and heart failure, the mean times (including risk calculations) were 85, 110, and 347 s respectively. Even under best case circumstances, providers take a significant amount of time to complete risk assessments. For a complete panel of patients this can lead to hours of time required to make decisions about prescribing statins, use of anticoagulation, and medications for heart failure. Informatics solutions are needed to capture data in the medical record and serve up automatically calculated risk assessments to physicians and other providers at the point of care.
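
    The panel-level burden follows directly from the reported per-patient times; a sketch using the study's mean decision times and an assumed panel size (the abstract does not state the actual size of the Mayo Clinic panel):

      # Mean per-patient decision times from the study (seconds)
      times = {"atrial fibrillation": 85, "lipids": 110, "heart failure": 347}

      panel_size = 1000  # assumed number of eligible patients (illustrative)
      for condition, seconds in times.items():
          hours = seconds * panel_size / 3600
          print(f"{condition}: {hours:.0f} clinician-hours per {panel_size} patients")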

  20. Ecosystem Services Insights into Water Resources Management in China: A Case of Xi'an City.

    PubMed

    Liu, Jingya; Li, Jing; Gao, Ziyi; Yang, Min; Qin, Keyu; Yang, Xiaonan

    2016-11-24

    Global climate and environmental changes are endangering global water resources, and several approaches have been tested to manage and reduce the pressure on these decreasing resources. This study uses the case of Xi'an City in China to test reasonable and effective methods for addressing water resource shortages. The study generated a framework combining ecosystem services and water resource management. Seven ecosystem indicators were classified as supply services, regulating services, or cultural services. Index values for each indicator were calculated and, based on questionnaire results, each index's weight was determined. Using the Likert method, we calculated ecosystem service supplies in every region of the city. We found that the ecosystem's service capability is closely related to water resources, and applying the ecosystem services concept to the water resources management of Xi'an City provides a practical method for decision makers.

  1. Navier-Stokes calculations on multi-element airfoils using a chimera-based solver

    NASA Technical Reports Server (NTRS)

    Jasper, Donald W.; Agrawal, Shreekant; Robinson, Brian A.

    1993-01-01

    A study of Navier-Stokes calculations of flows about multielement airfoils using a chimera grid approach is presented. The chimera approach utilizes structured, overlapped grids which allow great flexibility of grid arrangement and simplifies grid generation. Calculations are made for two-, three-, and four-element airfoils, and modeling of the effect of gap distance between elements is demonstrated for a two element case. Solutions are obtained using the thin-layer form of the Reynolds averaged Navier-Stokes equations with turbulence closure provided by the Baldwin-Lomax algebraic model or the Baldwin-Barth one equation model. The Baldwin-Barth turbulence model is shown to provide better agreement with experimental data and to dramatically improve convergence rates for some cases. Recently developed, improved farfield boundary conditions are incorporated into the solver for greater efficiency. Computed results show good comparison with experimental data which include aerodynamic forces, surface pressures, and boundary layer velocity profiles.

  2. Statistical analysis of QC data and estimation of fuel rod behaviour

    NASA Astrophysics Data System (ADS)

    Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.

    1991-02-01

    The behaviour of fuel rods while in reactor is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (worst case dataset) in fuel rod design calculations. Distributions are not considered. The results obtained in this way are very conservative but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst case dataset by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
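
    A toy version of the probabilistic step: instead of stacking worst-case tolerance limits, sample the fabrication distributions and read off a percentile of the derived quantity, here the diametral pellet-cladding gap. All distribution values are illustrative assumptions, not real fabrication data:

      import numpy as np

      rng = np.random.default_rng(1)

      def gap_percentile(n=100_000, pellet_d=(9.10, 0.01),
                         clad_id=(9.30, 0.01), q=99.9):
          """Diametral gap (mm) at the conservative q-th percentile.

          pellet_d, clad_id: (mean, standard deviation) of the fabrication
          distributions inside the specified tolerances (illustrative).
          Worst-case stacking would instead combine the extreme tolerance
          limits directly, with unquantified conservatism.
          """
          pellet = rng.normal(*pellet_d, n)
          clad = rng.normal(*clad_id, n)
          gap = clad - pellet
          return float(np.percentile(gap, 100 - q))    # small gaps are limiting

      print(gap_percentile())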

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsend, D.W.; Linnhoff, B.

    In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical processes, heat engines, and heat pumps. The method eliminates inferior alternatives early and leads positively to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.

  4. Spectral scalability and optical spectra of fractal multilayer structures: FDTD analysis

    NASA Astrophysics Data System (ADS)

    Simsek, Sevket; Palaz, Selami; Mamedov, Amirullah M.; Ozbay, Ekmel

    2017-01-01

    An investigation of the optical properties and band structures of conventional and Fibonacci photonic crystals (PCs) based on SrTiO3 and Sb2Te3 is made in the present research. Here, we use one-dimensional SrTiO3- and Sb2Te3-based layers. We have theoretically calculated the photonic band structure and transmission spectra of SrTiO3- and Sb2Te3-based PC superlattices. The positions of the minima in the transmission spectrum correlate with the gaps obtained in the calculation. The transmission dips are more pronounced in the case of higher refractive index contrast between the layers.

  5. Metastatic Renal Cell Carcinoma Masquerading as Jugular Foramen Paraganglioma: A Role for Novel Magnetic Resonance Imaging.

    PubMed

    Thomas, Andrew J; Wiggins, Richard H; Gurgel, Richard K

    2017-08-01

    To describe a case of metastatic renal cell carcinoma (RCC) masquerading as a jugular foramen paraganglioma (JP), and to compare imaging findings between skull base metastatic RCC and histologically proven paraganglioma. A case of unexpected metastatic skull base RCC is reviewed. Computed tomography (CT) and magnetic resonance imaging (MRI) findings were compared between 3 confirmed cases of JP and our case of metastatic RCC. Diffusion-weighted MRI (DW-MRI) sequences and computed apparent diffusion coefficient (ADC) values were compared between these entities. A 55-year-old man presented with what appeared clinically and radiographically to be JP. The tumor was resected and then discovered on postoperative pathology to be metastatic RCC. Imaging was retrospectively compared between the 3 histologically confirmed cases of JP and our case of skull base RCC. The RCC metastasis was indistinguishable from JP on CT and traditional MRI but distinct by ADC values calculated from DW-MRI. Metastatic RCC at the skull base may mimic the clinical presentation and radiographic appearance of JP. The MRI finding of flow voids is seen in both paraganglioma and metastatic RCC. Diffusion-weighted MRI is able to distinguish these entities, highlighting its potential utility in distinguishing skull base lesions.

  6. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient approach is travel time calculation in a 1D velocity model: for a given source depth, receiver depth and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differences between local and regional structures. Whenever possible, travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented - a simple and very efficient tool for calculating travel time from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
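
    The pySeismicFMM API is not shown in the abstract; to illustrate the Fast Marching approach itself, here is a sketch using the third-party scikit-fmm package, which propagates first-arrival times from a source cell through a gridded speed model (all model values are illustrative):

      import numpy as np
      import skfmm  # third-party package: pip install scikit-fmm

      # Illustrative 2D model: speed increasing linearly with depth
      nz, nx, dx = 200, 400, 0.5                       # grid size, spacing (km)
      speed = 4.0 + 0.02 * np.arange(nz)[:, None] * np.ones((1, nx))  # km/s

      # The zero level set of phi marks the source; make one cell negative
      phi = np.ones((nz, nx))
      phi[0, nx // 2] = -1.0                           # surface source, mid-model

      tt = skfmm.travel_time(phi, speed, dx=dx)        # first-arrival times (s)
      print(tt[150, 300])                              # time at one receiver cell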

  7. Tumor-based case-control studies of infection and cancer: muddling the when and where of molecular epidemiology.

    PubMed

    Engels, Eric A; Wacholder, Sholom; Katki, Hormuzd A; Chaturvedi, Anil K

    2014-10-01

    We describe the "tumor-based case-control" study as a type of epidemiologic study used to evaluate associations between infectious agents and cancer. These studies assess exposure using diseased tissues from affected individuals (i.e., evaluating tumor tissue for cancer cases), but they must utilize nondiseased tissues to assess control subjects, who do not have the disease of interest. This approach can lead to exposure misclassification in two ways. First, concerning the "when" of exposure assessment, retrospective assessment of tissues may not accurately measure exposure at the key earlier time point (i.e., during the etiologic window). Second, concerning the "where" of exposure assessment, use of different tissues in cases and controls can have different accuracy for detecting the exposure (i.e., differential exposure misclassification). We present an example concerning the association of human papillomavirus with various cancers, where tumor-based case-control studies likely overestimate risk associated with infection. In another example, we illustrate how tumor-based case-control studies of Helicobacter pylori and gastric cancer underestimate risk. Tumor-based case-control studies can demonstrate infection within tumor cells, providing qualitative information about disease etiology. However, measures of association calculated in tumor-based case-control studies are prone to over- or underestimating the relationship between infections and subsequent cancer risk. ©2014 American Association for Cancer Research.

  8. Rough case-based reasoning system for continuous casting

    NASA Astrophysics Data System (ADS)

    Su, Wenbin; Lei, Zhufeng

    2018-04-01

    Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billet, to improve the efficiency and accuracy of determining the processing parameters. An object-oriented method was applied to represent the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Several cases were used to test the retrieval mechanism; by analyzing the results, the influence of the retrieval attributes on determining the processing parameters was revealed. A comprehensive evaluation model was established using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of continuous casting and improve the intelligence level of the process.

  9. Approximate relations and charts for low-speed stability derivatives of swept wings

    NASA Technical Reports Server (NTRS)

    Toll, Thomas A; Queijo, M J

    1948-01-01

    Contains derivations, based on a simplified theory, of approximate relations for low-speed stability derivatives of swept wings. The method accounts for the effects of sweep and, in most cases, taper ratio. Charts based on the derived relations are presented for the stability derivatives of untapered swept wings. Calculated values of the derivatives are compared with experimental results.

  10. Towards real-time photon Monte Carlo dose calculation in the cloud

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-01

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  11. Towards real-time photon Monte Carlo dose calculation in the cloud.

    PubMed

    Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-07

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  12. Moist air state above counterflow wet-cooling tower fill based on Merkel, generalised Merkel and Klimanek & Białecky models

    NASA Astrophysics Data System (ADS)

    Hyhlík, Tomáš

    2017-09-01

    The article deals with an evaluation of the moist air state above a counterflow wet-cooling tower fill. The results based on the Klimanek & Białecky model are compared with results of the Merkel model and the generalised Merkel model. Based on the numerical simulation it is shown that the temperature is predicted correctly by the generalised Merkel model in the case of saturated or super-saturated air above the fill, but the temperature is underpredicted in the case of unsaturated moist air above the fill. The classical Merkel model always underpredicts the temperature above the fill. The density of moist air above the fill calculated using the generalised Merkel model is strongly overpredicted in the case of unsaturated moist air above the fill.
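
    For reference, the Merkel number that characterises the fill is, in its usual textbook form (notation varies between texts; this is a hedged sketch, with i_sw the enthalpy of saturated air at the local water temperature and i_a the bulk moist-air enthalpy):

      \mathrm{Me}
        = \frac{h_d A}{\dot{m}_w}
        = \int_{T_{w,\mathrm{out}}}^{T_{w,\mathrm{in}}}
          \frac{c_{pw}\,\mathrm{d}T_w}{i_{sw}(T_w) - i_{a}}

    Broadly, the generalised model relaxes Merkel's simplifying assumptions, such as a unit Lewis factor and neglect of the evaporated water mass, which is why the two models diverge for unsaturated air above the fill.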

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Liu, B; Liang, B

    Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and cannot handle irregular fields since a multi-leaf collimator system was recently introduced with the CyberKnife M6 system. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this work is to develop a GPU-based fast convolution/superposition (C/S) dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in the beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. The EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed based on the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The difference between measured and calculated TMR is less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; compared against the Monte Carlo method, the results showed better dose calculation accuracy than the Ray-tracing algorithm for heterogeneous cases. Regarding calculation time, one beam takes about several seconds depending on the collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system, which was proven to be efficient and accurate for clinical purposes, and can be easily implemented in a TPS.
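
    Below is a heavily simplified, CPU-only illustration of the convolution/superposition idea: dose as the superposition of TERMA with a point energy-deposition kernel. A single isotropic toy kernel on a 2D grid stands in for the 192 collapsed-cone directions, and all values are illustrative:

      import numpy as np
      from scipy.signal import fftconvolve

      # Illustrative 2D grid: TERMA from an exponentially attenuated narrow beam
      nx, ny, dx = 128, 128, 0.2                       # grid and spacing (cm)
      mu = 0.05                                        # toy attenuation (1/cm)
      terma = np.zeros((nx, ny))
      terma[:, 60:68] = np.exp(-mu * dx * np.arange(nx))[:, None]

      # Isotropic toy kernel with an exp(-r/r0)/r^2 fall-off (EDK stand-in)
      x = (np.arange(nx) - nx // 2) * dx
      r = np.hypot(*np.meshgrid(x, x)) + dx / 2        # avoid r = 0 singularity
      kernel = np.exp(-r / 0.5) / r**2
      kernel /= kernel.sum()                           # conserve deposited energy

      dose = fftconvolve(terma, kernel, mode="same")   # the superposition step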

  14. New methodology for fast prediction of wheel wear evolution

    NASA Astrophysics Data System (ADS)

    Apezetxea, I. S.; Perez, X.; Casanueva, C.; Alonso, A.

    2017-07-01

    In railway applications, wear prediction at the wheel-rail interface is a fundamental matter for studying problems such as wheel lifespan and the evolution of vehicle dynamic characteristics with time. However, one of the principal drawbacks of the existing methodologies for calculating the wear evolution is the computational cost. This paper proposes a new wear prediction methodology with a reduced computational cost. The methodology is based on two main steps. The first is the substitution of calculations over the whole network by the calculation of the contact conditions at certain characteristic points, from whose results the wheel wear evolution can be inferred. The second is the substitution of the dynamic calculation (time integration) by a quasi-static calculation (the solution of the quasi-static situation of a vehicle at a certain point, which is equivalent to neglecting the acceleration terms in the dynamic equations). These simplifications allow a significant reduction of the computational cost while maintaining an acceptable level of accuracy (errors of the order of 5-10%). Several case studies are analysed in the paper with the objective of assessing the proposed methodology. The results obtained in the case studies allow the conclusion that the proposed methodology is valid for an arbitrary vehicle running on an arbitrary track layout.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Y; Lacroix, F; Lavallee, M

    Purpose: To evaluate the commercially released collapsed-cone convolution-based (CCC) dose calculation module of the Elekta OncentraBrachy (OcB) treatment planning system (TPS). Methods: An all-water phantom was used to perform TG43 benchmarks with a single source and with seventeen sources, separately. Furthermore, four real-patient heterogeneous geometries (chest wall, lung, breast and prostate) were used. They were selected as clinically representative of classes of anatomies that pose clear challenges. The plans were used as is (no modification). For each case, TG43 and CCC calculations were performed in the OcB TPS, with TG186-recommended materials properly assigned to the ROIs. For comparison, a Monte Carlo simulation was run for each case with the same material scheme and grid mesh as the TPS calculations. Both modes of CCC (standard and high quality) were tested. Results: For the benchmark case, the CCC dose, when divided by that of TG43, yields hot and cold spots in a radial pattern. The pattern of the high mode is denser than that of the standard mode and is representative of the angular discretization. The total deviation ((hot-cold)/TG43) is 18% for the standard mode and 11% for the high mode. Seventeen dwell positions help to reduce the "ray effect", lowering the total deviation to 6% (standard) and 5% (high), respectively. For the four patient cases, CCC produces, as expected, more realistic dose distributions than TG43. Close agreement was observed between CCC and MC for all isodose lines from 20% and up; the 10% isodose line of CCC appears shifted compared to that of MC. The DVH plots show dose deviations of CCC from MC in small-volume, high-dose regions (>100% isodose). For the patient cases, the difference between the standard and high modes is almost indiscernible. Conclusion: The OncentraBrachy CCC algorithm marks a significant dosimetry improvement relative to TG43 in real-patient cases. Further research is recommended regarding the clinical implications of the above observations. Support provided by a CIHR grant and the CCC system provided by Elekta-Nucletron.

  16. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contains a group of particles that are of the same type, are close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions for a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high-dose-gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low-dose-gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference in output factors was within 0.5%. A passing rate of over 98.5% was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrate the efficacy of our model in accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.

  17. Origin of Starting Earthquakes under Complete Coupling of the Lithosphere Plates and a Base

    NASA Astrophysics Data System (ADS)

    Babeshko, V. A.; Evdokimova, O. V.; Babeshko, O. M.; Zaretskaya, M. V.; Gorshkova, E. M.; Mukhin, A. S.; Gladskoi, I. B.

    2018-02-01

    The boundary problem of rigid coupling of lithospheric plates, modeled by Kirchhoff plates, with a base represented by a three-dimensional deformable layered medium is considered. The possibility of the occurrence of a starting earthquake in such a block structure is investigated. For this purpose, two states of the medium in the static mode are considered. In the first case, the semi-infinite lithospheric plates, in the form of half-planes, are separated so that the distance between their end faces is different from zero. In the second case, the lithospheric plates come together to zero spacing between them. Calculations have shown that in the latter case more complex movements of the Earth's surface are possible. Among such movements are the cases described in our previous publications [1, 2].

  18. Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit

    PubMed Central

    2012-01-01

    Background Independent external audits play an important role in quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on dosimetry part of the treatment planning and delivery processes. Methods The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography unit (CT) and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases and doses in specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. Results The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviation between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while the larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the increase of the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in TPS were detected and corrected. Conclusions The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed and, therefore the transition to more advanced algorithms is highly desirable. PMID:22971539

  19. Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit.

    PubMed

    Rutonjski, Laza; Petrović, Borislava; Baucal, Milutin; Teodorović, Milan; Cudić, Ozren; Gershkevitsh, Eduard; Izewska, Joanna

    2012-09-12

    Independent external audits play an important role in quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on dosimetry part of the treatment planning and delivery processes. The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography unit (CT) and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases and doses in specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviation between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while the larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the increase of the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in TPS were detected and corrected. The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed and, therefore the transition to more advanced algorithms is highly desirable.

  20. Electron-ion collision-frequency for x-ray Thomson scattering in dense plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faussurier, Gérald, E-mail: gerald.faussurier@cea.fr; Blancard, Christophe

    2016-01-15

    Two methods are presented to calculate the electron-ion collision-frequency in dense plasmas using an average-atom model. The first one is based on the Kubo-Greenwood approach. The second one uses the Born and Lenard-Balescu approximations. The two methods are used to calculate x-ray Thomson scattering spectra. Illustrations are shown for dense beryllium and aluminum plasmas. Comparisons with experiment are presented in the case of an x-ray Thomson scattering spectrum.

  1. Measuring Decision-Making During Thyroidectomy: Validity Evidence for a Web-Based Assessment Tool.

    PubMed

    Madani, Amin; Gornitsky, Jordan; Watanabe, Yusuke; Benay, Cassandre; Altieri, Maria S; Pucher, Philip H; Tabah, Roger; Mitmaker, Elliot J

    2018-02-01

    Errors in judgment during thyroidectomy can lead to recurrent laryngeal nerve injury and other complications. Despite the strong link between patient outcomes and intraoperative decision-making, methods to evaluate these complex skills are lacking. The purpose of this study was to develop objective metrics to evaluate advanced cognitive skills during thyroidectomy and to obtain validity evidence for them. An interactive online learning platform was developed ( www.thinklikeasurgeon.com ). Trainees and surgeons from four institutions completed a 33-item assessment, developed based on a cognitive task analysis and expert Delphi consensus. Sixteen items required subjects to make annotations on still frames of thyroidectomy videos, and accuracy scores were calculated based on an algorithm derived from experts' responses ("visual concordance test," VCT). Seven items were short answer (SA), requiring users to type their answers, and scores were automatically calculated based on their similarity to a pre-populated repertoire of correct responses. Test-retest reliability, internal consistency, and correlation of scores with self-reported experience and training level (novice, intermediate, expert) were calculated. Twenty-eight subjects (10 endocrine surgeons and otolaryngologists, 18 trainees) participated. There was high test-retest reliability (intraclass correlation coefficient = 0.96; n = 10) and internal consistency (Cronbach's α = 0.93). The assessment demonstrated significant differences between novices, intermediates, and experts in total score (p < 0.01), VCT score (p < 0.01) and SA score (p < 0.01). There was high correlation between total case number and total score (ρ = 0.95, p < 0.01), between total case number and VCT score (ρ = 0.93, p < 0.01), and between total case number and SA score (ρ = 0.83, p < 0.01). This study describes the development of novel metrics and provides validity evidence for an interactive Web-based platform to objectively assess decision-making during thyroidectomy.
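
    The internal-consistency statistic reported here is standard; a sketch of Cronbach's α computed from a subjects-by-items score matrix:

      import numpy as np

      def cronbach_alpha(scores):
          """Cronbach's alpha for a (subjects x items) score matrix."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                          # number of items
          item_vars = scores.var(axis=0, ddof=1).sum() # sum of item variances
          total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
          return k / (k - 1) * (1 - item_vars / total_var)

      # Illustrative 4-subject, 3-item matrix
      print(round(cronbach_alpha([[3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]]), 2))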

  2. Multi-scale modeling of CO2 dispersion leaked from seafloor off the Japanese coast.

    PubMed

    Kano, Yuki; Sato, Toru; Kita, Jun; Hirabayashi, Shinichiro; Tabeta, Shigeru

    2010-02-01

    A numerical simulation was conducted to predict the change of pCO2 in the ocean caused by CO2 leaked from an underground aquifer, in which CO2 is purposefully stored. The target space of the present model was the ocean above the seafloor. The behavior of CO2 bubbles, their dissolution, and the advection-diffusion of dissolved CO2 were numerically simulated. Here, two cases for the leakage rate were studied: an extreme case, 94,600 t/y, which assumed that a large fault accidentally connects the CO2 reservoir and the seafloor; and a reasonable case, 3800 t/y, based on the seepage rate of an existing EOR site. In the extreme case, the calculated ΔpCO2 experienced by floating organisms was less than 300 ppm, while that for immobile organisms directly over the fault surface periodically exceeded 1000 ppm, if momentarily. In the reasonable case, the calculated ΔpCO2 and pH were within the range of natural fluctuation. Copyright 2009 Elsevier Ltd. All rights reserved.

  3. [Prospective performance evaluation of first trimester screenings in Germany for risk calculation through http://www.firsttrimester.net].

    PubMed

    Kleinsorge, F; Smetanay, K; Rom, J; Hörmansdörfer, C; Hörmannsdörfer, C; Scharf, A; Schmidt, P

    2010-12-01

    In 2008, 2,351 first trimester screenings were evaluated with a newly developed internet database ( http://www.firsttrimester.net ) to assess the risk for the presence of Down's syndrome. All data were evaluated both by conventional first trimester screening according to Nicolaides (FTS), based on the previous JOY software, and by the advanced first trimester screening (AFS). After feedback on the karyotype was received, the rates of correct positives, correct negatives, false positives and false negatives, as well as the sensitivity and specificity, were calculated and compared. Overall, 255 cases were analysed by both methods, including 2 cases of Down's syndrome and one case of trisomy 18. The FTS and the AFS both had a sensitivity of 100%. The specificity was 88.5% for the FTS and 93.0% for the AFS. As already shown in former studies, the higher specificity of the AFS results from a reduction of the false positive rate (from 28 to 17 cases). As a consequence, the AFS, with a detection rate of 100%, reduces the rate of further invasive diagnostics in pregnant women by yielding 39% fewer positively tested women. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Health risk among asbestos cement sheet manufacturing workers in Thailand.

    PubMed

    Phanprasit, Wantanee; Sujirarat, Dusit; Chaikittiporn, Chalermchai

    2009-12-01

    To assess asbestos exposure, calculate the relative risk of lung cancer among asbestos cement roof sheet workers, and predict the incidence of lung cancer caused by asbestos in Thailand. A cross-sectional study was conducted in four asbestos cement roof factories. Both area and personal air samples were collected and analyzed employing NIOSH method #7400 and counting rule A for all processes and activities. Time-weighted average exposures were calculated for each studied task using the average area concentrations of the mill and the personal concentrations. Cumulative exposures were then estimated based on past nationwide air sampling concentrations and those from the present study. The relative risk (RR) of lung cancer among asbestos cement sheet workers was calculated and the number of asbestos-related lung cancer cases was estimated. The roof fitting polishers had the highest exposure to airborne asbestos fiber (0.73 fiber/ml). The highest average area concentration was at the conveyor to the de-bagger areas (0.02 fiber/ml). The estimated cumulative exposures for the workers performing the studied tasks ranged between 90.13 and 115.65 fiber-years/ml, while the relative risks of lung cancer calculated using the US EPA's model were 5.37-5.96. Based on the obtained RR, lung cancer among AC sheet workers in Thailand would amount to about 2 cases/year. If AC sheet manufacturing is not prohibited, even though only chrysotile is allowed, the surveillance system should be further developed and implemented more rigorously, and better control measures must be implemented for all processes. Furthermore, due to the environmental persistence of asbestos fibers, a life cycle analysis should be conducted in order to control the environmental exposure of the general population.

  5. Locating, characterizing and minimizing sources of error for a paper case-based structured oral examination in a multi-campus clerkship.

    PubMed

    Kumar, A; Bridgham, R; Potts, M; Gushurst, C; Hamp, M; Passal, D

    2001-01-01

    To determine the consistency of assessment in a new paper case-based structured oral examination in a multi-community pediatrics clerkship, and to identify correctable problems in the administration of the examination and the assessment process. Nine paper case-based oral examinations were audio-taped. From the audio-tapes, five community coordinators scored examiner behaviors and graded student performance. Correlations among examiner behavior scores were examined. Graphs identified the grading patterns of evaluators. The effect of exam-giving on evaluators was assessed by t-test. The reliability of grades was calculated and the effect of reducing assessment problems was modeled. Exam-givers differed most in their "teaching-guiding" behavior, which correlated negatively with student grades. Exam reliability was lowered mainly by evaluator differences in leniency and grading pattern; less important was the absence of standardization in cases. While grade reliability was low in early use of the paper case-based oral examination, modeling of the plausible effects of training and monitoring for greater uniformity in administering the examination and assigning scores suggests that more adequate reliabilities can be attained.

  6. A case study of a multiply talented savant with an autism spectrum disorder: neuropsychological functioning and brain morphometry.

    PubMed

    Wallace, Gregory L; Happé, Francesca; Giedd, Jay N

    2009-05-27

    Neuropsychological functioning and brain morphometry in a savant (case GW) with an autism spectrum disorder (ASD) and both calendar calculation and artistic skills are quantified and compared with small groups of neurotypical controls. Good memory, mental calculation and visuospatial processing, as well as (implicit) knowledge of calendar structure and 'weak' central coherence characterized the cognitive profile of case GW. Possibly reflecting his savant skills, the superior parietal region of GW's cortex was the only area thicker (while areas such as the superior and medial prefrontal, middle temporal and motor cortices were thinner) than that of a neurotypical control group. Taken from the perspective of learning/practice-based models, skills in domains (e.g. calendars, art, music) that capitalize upon strengths often associated with ASD, such as detail-focused processing, are probably further enhanced through over-learning and massive exposure, and reflected in atypical brain structure.

  7. A case study of a multiply talented savant with an autism spectrum disorder: neuropsychological functioning and brain morphometry

    PubMed Central

    Wallace, Gregory L.; Happé, Francesca; Giedd, Jay N.

    2009-01-01

    Neuropsychological functioning and brain morphometry in a savant (case GW) with an autism spectrum disorder (ASD) and both calendar calculation and artistic skills are quantified and compared with small groups of neurotypical controls. Good memory, mental calculation and visuospatial processing, as well as (implicit) knowledge of calendar structure and ‘weak’ central coherence characterized the cognitive profile of case GW. Possibly reflecting his savant skills, the superior parietal region of GW's cortex was the only area thicker (while areas such as the superior and medial prefrontal, middle temporal and motor cortices were thinner) than that of a neurotypical control group. Taken from the perspective of learning/practice-based models, skills in domains (e.g. calendars, art, music) that capitalize upon strengths often associated with ASD, such as detail-focused processing, are probably further enhanced through over-learning and massive exposure, and reflected in atypical brain structure. PMID:19528026

  8. BioFET-SIM web interface: implementation and two applications.

    PubMed

    Hediger, Martin R; Jensen, Jan H; De Vico, Luca

    2012-01-01

    We present a web interface which allows one to conveniently set up calculations based on the BioFET-SIM model. With the interface, the signal of a BioFET sensor can be calculated as a function of its parameters, as well as the signal's dependence on pH. As an illustration, two case studies are presented. In the first case, a generic peptide with opposite charges on both ends is inverted in orientation on a semiconducting nanowire surface, leading to a corresponding change in sign of the computed sensitivity of the device. In the second case, the binding of an antibody/antigen complex on the nanowire surface is studied in terms of orientation and analyte/nanowire surface distance. We demonstrate how the BioFET-SIM web interface can aid in the understanding of experimental data and postulate alternative ways of antibody/antigen orientation on the nanowire surface.
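
    A minimal sketch of the screening physics behind such signal calculations, assuming a simple exponential Debye model (this is not the BioFET-SIM code or its API; all parameter values are illustrative):

```python
import math

def debye_length_nm(ionic_strength_mol_per_L):
    # Standard room-temperature approximation for a 1:1 electrolyte in water:
    # lambda_D ~ 0.304 nm / sqrt(I [mol/L]).
    return 0.304 / math.sqrt(ionic_strength_mol_per_L)

def effective_charge(charge_e, distance_nm, ionic_strength=0.01):
    """Charge seen by the nanowire after exponential Debye screening.

    A simplified stand-in for the BioFET-SIM signal model: flipping the
    orientation of a peptide with opposite end charges swaps the charge
    distances and hence the sign of the net sensed charge.
    """
    lam = debye_length_nm(ionic_strength)
    return charge_e * math.exp(-distance_nm / lam)

# Generic peptide with +1 e at one end and -1 e at the other (hypothetical).
up   = effective_charge(+1, 2.0) + effective_charge(-1, 6.0)
down = effective_charge(+1, 6.0) + effective_charge(-1, 2.0)
print(f"orientation A: {up:+.3f} e, orientation B: {down:+.3f} e")
```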

  9. Minimizing the IOL power error induced by keratometric power.

    PubMed

    Camps, Vicente J; Piñero, David P; de Fez, Dolores; Mateo, Verónica

    2013-07-01

    To evaluate theoretically in normal eyes the influence of the use of a keratometric index (nk) on IOL power (PIOL) calculation, and to analyze and preliminarily validate the use of an adjusted keratometric index (nkadj) in the IOL power calculation (PIOLadj). A model of variable keratometric index (nkadj) for corneal power calculation (Pc) was used for IOL power calculation (named PIOLadj). Theoretical differences (ΔPIOL) between the newly proposed formula (PIOLadj) and the one obtained through Gaussian optics (equation included in the full-text article) were determined using the Gullstrand and Le Grand eye models. The proposed new formula for IOL power calculation (PIOLadj) was prevalidated clinically in 81 eyes of 81 candidates for corneal refractive surgery and compared with the Haigis, Hoffer Q, Holladay, and SRK/T formulas. A theoretical PIOL underestimation greater than 0.5 diopters was present in most of the cases when nk = 1.3375 was used. If nkadj was used for Pc calculation, a maximal calculated error in ΔPIOL of ±0.5 diopters at the corneal vertex was observed in most cases, independently of the eye model, r1c, and the desired postoperative refraction. The use of nkadj in IOL power calculation (PIOLadj) could be valid with effective lens position optimization not dependent on the corneal power. The use of a single value of nk for Pc calculation can lead to significant errors in PIOL calculation that may explain some IOL power overestimations with conventional formulas. These inaccuracies can be minimized by using the new PIOLadj based on the algorithm of nkadj.
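
    The central quantity is the keratometric corneal power, Pc = (nk − 1)/r1c. A minimal sketch of how the choice of index propagates into the calculated power (the adjusted index value below is a hypothetical placeholder, not the authors' fitted nkadj):

```python
def corneal_power(n_k, r1c_mm):
    """Keratometric corneal power in diopters: Pc = (nk - 1) / r1c."""
    return (n_k - 1.0) / (r1c_mm / 1000.0)

r1c = 7.8  # anterior corneal radius in mm (typical value)
p_classic = corneal_power(1.3375, r1c)   # conventional keratometric index
p_adjusted = corneal_power(1.3320, r1c)  # hypothetical adjusted index

# The difference propagates almost one-to-one into the IOL power.
print(f"Pc(1.3375) = {p_classic:.2f} D, Pc(adjusted) = {p_adjusted:.2f} D, "
      f"difference = {p_classic - p_adjusted:+.2f} D")
```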

  10. The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool

    PubMed Central

    Stephen, Cook; Benjamin, Longo-Mbenza

    2013-01-01

    AIM It is difficult for optometrists and general practitioners to know which patients are at risk. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator that has been developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator risk assessment. RESULTS Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
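
    The reported figures follow from a standard 2×2 confusion matrix. A short sketch of the arithmetic (the counts below are invented so as to roughly reproduce the reported sensitivity, specificity and PPV; they are not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical validation counts chosen to give Se ~ 88%, Sp ~ 75%, PPV ~ 97%.
print(screening_metrics(tp=264, fp=8, fn=36, tn=24))
```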

  11. A case study of view-factor rectification procedures for diffuse-gray radiation enclosure computations

    NASA Technical Reports Server (NTRS)

    Taylor, Robert P.; Luck, Rogelio

    1995-01-01

    The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
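
    A rectification step of this general kind can be sketched compactly. The snippet below is a simplified stand-in for the least-squares filtering schemes reviewed in the paper: it alternately enforces reciprocity (A_i F_ij = A_j F_ji) and closure (each row of F summing to one) on an approximate view-factor matrix.

```python
import numpy as np

def rectify_view_factors(F, areas, n_iter=50):
    """Alternating projection onto the reciprocity and closure constraints.

    Not the paper's weighted least-squares filter; just a minimal
    illustration of what "rectification" must achieve.
    """
    A = np.asarray(areas, dtype=float)
    F = np.array(F, dtype=float)
    for _ in range(n_iter):
        S = A[:, None] * F
        S = 0.5 * (S + S.T)                 # enforce reciprocity
        F = S / A[:, None]
        F /= F.sum(axis=1, keepdims=True)   # enforce closure
    return F

# Crude one-digit view factors for a 3-surface enclosure.
F0 = [[0.0, 0.5, 0.5],
      [0.2, 0.1, 0.7],
      [0.3, 0.6, 0.1]]
F = rectify_view_factors(F0, areas=[1.0, 2.0, 2.0])
print(np.round(F, 4), F.sum(axis=1))
```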

  12. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive computational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS. These calculations are conducted in 2D at the fuel assembly level. It is also possible to calculate these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to see the results of full-core calculations based on two sets of diffusion data obtained by Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based directly on the fuel assembly level macroscopic data and the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the nuclear data selection helps to understand the inherent uncertainties of such full-core calculations.
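
    When two evaluated libraries are compared at the full-core level, a customary summary is the reactivity difference between the calculated multiplication factors. A small sketch of that bookkeeping (the k-eff values are hypothetical):

```python
def reactivity_difference_pcm(k_a, k_b):
    """Reactivity difference in pcm between two k-eff values:
    delta_rho = (k_b - k_a) / (k_a * k_b) * 1e5."""
    return (k_b - k_a) / (k_a * k_b) * 1e5

k_endf = 1.00312  # hypothetical k-eff with ENDF/B-VII.1 data
k_jeff = 1.00488  # hypothetical k-eff with JEFF-3.2 data
print(f"library effect: {reactivity_difference_pcm(k_endf, k_jeff):+.0f} pcm")
```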

  13. Opening of DNA chain due to force applied on different locations.

    PubMed

    Singh, Amar; Modi, Tushar; Singh, Navin

    2016-09-01

    We consider a homogeneous DNA molecule and investigate the effect of a force applied at different locations on the unzipping profile of the molecule. How the critical force varies as a function of the chain length, or number of base pairs, is the objective of this study. In general, the ratio of the critical force applied at the middle of the chain to that applied at one of its ends is two. Our study shows that this ratio depends on the length of the chain, which means that a force applied at a point is experienced by only a section of the chain: beyond a certain length, the base pairs have no information about the applied force. When the chain is shorter than this length, the ratio may vary; only when the chain length exceeds this critical length is the ratio found to be two. Based on the de Gennes formulation, we developed a method to calculate these forces at zero temperature. The exact results at zero temperature match the numerical calculations.

  14. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big data processing and is often time-consuming. In order to speed up the ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to a factor of 28.8 compared to the directly calculated ZNCC, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
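
    The speed-up comes from correlating one-dimensional projections of each interrogation window rather than its full two-dimensional intensity field. A minimal sketch of the idea (not the authors' implementation):

```python
import numpy as np

def zncc_1d(a, b):
    """Zero-normalized cross-correlation of two 1-D signals."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def zncc_projection(win_a, win_b):
    """Approximate ZNCC of two 2-D windows via their row/column projections.

    Projecting an m-by-n window onto its axes reduces the per-pair cost
    from O(m*n) to O(m+n), which is where the reported speed-up originates.
    """
    pa = np.concatenate([win_a.sum(axis=0), win_a.sum(axis=1)])
    pb = np.concatenate([win_b.sum(axis=0), win_b.sum(axis=1)])
    return zncc_1d(pa, pb)

rng = np.random.default_rng(0)
w = rng.random((32, 32))
shifted = np.roll(w, shift=2, axis=1)  # same pattern, displaced
print(zncc_projection(w, w), zncc_projection(w, shifted))
```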

  15. An open source software for fast grid-based data-mining in spatial epidemiology (FGBASE).

    PubMed

    Baker, David M; Valleron, Alain-Jacques

    2014-10-30

    Examining whether disease cases are clustered in space is an important part of epidemiological research. Another important part of spatial epidemiology is testing whether patients suffering from a disease are more, or less, exposed to environmental factors of interest than adequately defined controls. Both approaches involve determining the number of cases and controls (or population at risk) in specific zones. For cluster searches, this often must be done for millions of different zones. Doing this by calculating distances can lead to very lengthy computations. In this work we discuss the computational advantages of geographical grid-based methods, and introduce an open-source software package (FGBASE) which we have created for this purpose. Geographical grids based on the Lambert Azimuthal Equal Area projection are well suited for spatial epidemiology because they preserve area: each cell of the grid has the same area. We describe how data are projected onto such a grid, as well as grid-based algorithms for spatial epidemiological data-mining. The software program (FGBASE) that we have developed implements these grid-based methods. The grid-based algorithms perform extremely fast. This is particularly the case for cluster searches. When applied to a cohort of French Type 1 Diabetes (T1D) patients, as an example, the grid-based algorithms detected potential clusters in a few seconds on a modern laptop. This compares very favorably to an equivalent cluster search using distance calculations instead of a grid, which took over 4 hours on the same computer. In the case study we discovered 4 potential clusters of T1D cases near the cities of Le Havre, Dunkerque, Toulouse and Nantes. One example of environmental analysis with our software was to test whether a significant association could be found between cases and distance to vineyards with heavy pesticide use; none was found. In both examples, the software facilitates the rapid testing of hypotheses. Grid-based algorithms for mining spatial epidemiological data provide advantages in terms of computational complexity, thus improving the speed of computations. We believe that these methods and this software tool (FGBASE) will lower the computational barriers to entry for those performing epidemiological research.
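
    The computational advantage is easy to see in miniature: once case coordinates are binned into equal-area cells, zone counts become array lookups instead of millions of pairwise distance evaluations. A sketch with NumPy (grid parameters are hypothetical; a real implementation would first project the coordinates with the Lambert Azimuthal Equal Area projection):

```python
import numpy as np

def grid_counts(x, y, cell_size, x0, y0, nx, ny):
    """Count points per cell of a regular equal-area grid.

    x, y are projected (equal-area) coordinates in meters.
    """
    ix = ((x - x0) // cell_size).astype(int)
    iy = ((y - y0) // cell_size).astype(int)
    counts = np.zeros((ny, nx), dtype=int)
    inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(counts, (iy[inside], ix[inside]), 1)
    return counts

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100_000, 10_000), rng.uniform(0, 100_000, 10_000)
cases = grid_counts(x, y, cell_size=5_000, x0=0, y0=0, nx=20, ny=20)
# Cases in any rectangular zone are now a cheap sub-array sum:
print(cases[4:8, 4:8].sum())
```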

  16. Second order tensor finite element

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, J.; Berry, C.; Tworzydlo, W.; Vadaketh, S.; Bass, J.

    1990-01-01

    The results of a research and software development effort are presented for the finite element modeling of the static and dynamic behavior of anisotropic materials, with emphasis on single crystal alloys. Various versions of two-dimensional and three-dimensional hybrid finite elements were implemented and compared with displacement-based elements. Both static and dynamic cases are considered. The hybrid elements developed in the project were incorporated into the SPAR finite element code. In an extension of the first phase of the project, optimization of experimental tests for anisotropic materials was addressed. In particular, the problems of calculating material properties from tensile tests and of calculating stresses from strain measurements were considered. For both cases, numerical procedures and software for the optimization of strain gauge and material axes orientation were developed.

  17. Development of Condensing Mesh Method for Corner Domain at Numerical Simulation Magnetic System

    NASA Astrophysics Data System (ADS)

    Perepelkin, E.; Tarelkin, A.; Polyakova, R.; Kovalenko, A.

    2018-05-01

    A magnetostatic problem arises in searching for the distribution of the magnetic field generated by the magnet systems of many physics research facilities, e.g., accelerators. The domain in which the boundary-value problem is solved often has a piecewise smooth boundary. In this case, numerical calculations of the problem require consideration of the solution behavior in the corner domain. In this work we obtain an upper estimate of the magnetic field growth and, based on this estimate, propose a method of condensing the difference grid near a corner domain of vacuum in three-dimensional space. An example of calculating a real model problem for SDP NICA in a domain containing a corner point is given.
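
    Grid condensation of this kind is typically realized as a geometric grading of the node spacing toward the corner. A hedged one-dimensional sketch (in practice the grading ratio would be chosen from the growth estimate derived in the paper):

```python
def graded_nodes(length, n, ratio=0.8):
    """Node coordinates on [0, length] with spacing shrinking geometrically
    toward x = 0 (the corner). ratio < 1 condenses the grid at the corner."""
    steps = [ratio**k for k in range(n)]   # largest step farthest from corner
    scale = length / sum(steps)
    xs, x = [0.0], 0.0
    for s in reversed(steps):              # walk from the corner outward
        x += s * scale
        xs.append(x)
    return xs

print([round(v, 4) for v in graded_nodes(1.0, 8)])
```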

  18. Accurate reporting of adherence to inhaled therapies in adults with cystic fibrosis: methods to calculate “normative adherence”

    PubMed Central

    Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J

    2016-01-01

    Background Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, “simple” and “sophisticated” normative adherence. Methods to calculate normative adherence Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person’s characteristics. For simple normative adherence, the denominator is determined by the person’s Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person’s Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level. Three illustrative cases Case A is an example of inhaled therapy under-prescription based on Pseudomonas status resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence. Conclusion Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness. The value of these indices can be tested empirically in clinical trials in which there is careful definition of treatment regimens related to key patient characteristics, alongside accurate measurement of health outcomes. PMID:27284242
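
    Both indices reduce to numerator/denominator arithmetic once the appropriate regimen is fixed. A sketch under stated assumptions (the daily dose counts used below are invented for illustration and are not the clinical rules from the paper):

```python
def normative_adherence(doses_taken, prescribed_daily, appropriate_daily, days):
    """Adherence as a percentage of the *appropriate* regimen.

    Numerator: daily use capped at the appropriate daily count so that
    overuse cannot inflate the index. Denominator: the appropriate
    (not merely the prescribed) number of daily doses.
    """
    denominator = max(prescribed_daily, appropriate_daily) * days
    capped = sum(min(d, appropriate_daily) for d in doses_taken)
    return 100.0 * capped / denominator

# 7 days of nebulizer records; the patient is prescribed 2 doses/day, but
# Pseudomonas status (hypothetically) warrants 3 doses/day.
week = [2, 2, 3, 5, 2, 0, 2]     # note the overuse on day 4
print(f"unadjusted: {100 * sum(week) / (2 * 7):.0f}%")   # inflated by overuse
print(f"normative:  {normative_adherence(week, 2, 3, 7):.0f}%")
```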

  19. [How does the German DRG system differentiate and reimburse vitreoretinal surgery in diabetic patients?].

    PubMed

    Krause, M; Goldschmidt, A J; Berg, M; Kropf, S; Sachs, A; Gatzioufas, Z; Brückner, K; Seitz, B

    2008-10-01

    The German DRG system (G-DRG system) is required to assign medical cases with similar costs correctly to a particular group, each case within the group receiving the same amount of reimbursement. At the same time the system should allow all-inclusive reimbursement, not necessarily reflecting the exact costs of each case. These opposing goals and the thus far limited calculation basis raise the question of how the G-DRG system actually processes and reimburses empirically collected in-hospital treatment data. In 2005, 112 patients were admitted to the University Eye Hospital, University of the Saarland. All patients had diabetic retinopathy and required at least one vitreoretinal procedure. Demographic and clinical data were collected using the hospital information system and the coding software KODIP. For statistical evaluation, principal diagnoses, ancillary diagnoses and procedures were each reassigned to particular groups. Reimbursement was calculated based on the case data of the year 2005. The case data were also regrouped with respect to the calculation of reimbursement for the years 2006 and 2007. The results were compared with federal G-DRG calculation data. The mean age of the patients was 65.8 +/- 11.1 years; the length of in-hospital stay was 9.3 +/- 3.2 days. In the 66 patients requiring general anaesthesia, the cumulative length of stay in the operating room was 148.4 +/- 39.5 minutes and the cumulative duration of surgery was 86.3 +/- 34.1 minutes. In the 50 patients requiring local anaesthesia, the cumulative length of stay in the operating room was 137.8 +/- 51.8 minutes and the cumulative duration of surgery was 81.6 +/- 43.6 minutes. The patients had 1.9 +/- 0.8 principal diagnoses, 14.4 +/- 5.8 ancillary diagnoses and 3.4 +/- 1.6 procedures. Twenty-five of 112 patients (22.3 %) were assigned to DRG C 03Z (1), and 82 of 112 patients (73.2 %) were assigned to DRG C 17Z (2). Five patients were assigned to other DRGs. Compared with the federal calculation data, our own data for 2005, 2006 and 2007 showed more cases with high primary clinical complexity levels and a longer duration of in-hospital stay. For each of the three years the amount of reimbursement was equal in about two thirds of our patients. Reimbursement was only differentiated for outliers beyond the trim point of the duration of in-hospital stay. The demographic and clinical G-DRG data of the included patients showed substantial cost-relevant inhomogeneities. These inhomogeneities were not sufficiently reflected in reimbursement based upon the Z-DRGs. Specialised departments with higher numbers of difficult cases may be disadvantaged. Wrong incentives may result in the selection of "low-risk cases".

  20. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    PubMed

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in times efficient for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel "biophysical" map generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine the beamlets with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV, treated using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a time short enough to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps derived from patient imaging data has been presented. The sequencing of these maps yields deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
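
    The aperture-weighting step can be posed as a small linear program: the segment doses are fixed by the MC calculation, and only nonnegative weights are optimized against the prescription. A hedged sketch with SciPy (the dose matrix and prescription are toy values; the actual CARMEN objective and constraints are richer):

```python
import numpy as np
from scipy.optimize import linprog

# D[i, j]: MC-computed dose to voxel i from aperture j at unit weight.
D = np.array([[1.0, 0.2, 0.1],
              [0.3, 0.9, 0.2],
              [0.1, 0.3, 1.1],
              [0.4, 0.4, 0.4]])
p = np.array([2.0, 2.0, 2.0, 1.0])   # prescribed voxel doses
m, n = D.shape

# Variables: n aperture weights w, then m slacks t with |D w - p| <= t.
# Minimize sum(t), i.e. the total absolute dose deviation.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[D, -np.eye(m)],      #  D w - t <= p
                 [-D, -np.eye(m)]])    # -D w - t <= -p
b_ub = np.concatenate([p, -p])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))

weights = res.x[:n]
print("aperture weights:", np.round(weights, 3))
print("delivered dose:  ", np.round(D @ weights, 3))
```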

  1. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
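
    The key ingredient is the conditional distribution of next-period inflow given the current one, read off a fitted copula. A sketch for the Clayton member of the Archimedean family (the parameter value is arbitrary; fitting it to inflow data is the paper's first step):

```python
import numpy as np

def clayton_conditional(u, v, theta):
    """P(V <= v | U = u) for a Clayton copula:
    dC/du = u**(-theta-1) * (u**-theta + v**-theta - 1)**(-1/theta - 1)."""
    return u ** (-theta - 1) * (u ** -theta + v ** -theta - 1) ** (-1 / theta - 1)

def transition_matrix(theta, n_bins):
    """Transition probabilities between equal-probability inflow classes."""
    edges = np.linspace(0, 1, n_bins + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])      # conditioning quantiles
    P = np.empty((n_bins, n_bins))
    for i, u in enumerate(mids):
        cdf = clayton_conditional(u, edges[1:], theta)
        P[i] = np.diff(np.concatenate([[0.0], cdf]))
    return P / P.sum(axis=1, keepdims=True)

P = transition_matrix(theta=2.0, n_bins=5)
print(np.round(P, 3))   # rows: current class; columns: next-period class
```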

  2. Hearing impairment associated with oral terbinafine use: a case series and case/non-case analysis in the Netherlands Pharmacovigilance Centre Lareb database and VigiBase™.

    PubMed

    Scholl, Joep H G; van Puijenbroek, Eugene P

    2012-08-01

    The Netherlands Pharmacovigilance Centre Lareb received reports of six cases of hearing impairment in association with oral terbinafine use. This study describes these cases and provides support for this association from the Lareb database for spontaneous adverse drug reaction (ADR) reporting and from VigiBase™, the ADR database of the WHO Collaborating Centre for International Drug Monitoring, the Uppsala Monitoring Centre. The objective of the current study was to identify whether the observed association between oral terbinafine use and hearing impairment, based on cases received by Lareb, constitutes a safety signal. Cases of hearing impairment in oral terbinafine users are described. In a case/non-case analysis, the strength of the association in VigiBase™ and the Lareb database was determined (date of analysis August 2011) by calculating the reporting odds ratios (RORs), adjusted for possible confounding by age, sex and ototoxic concomitant medication. For the purpose of this study, RORs were calculated for deafness, hypoacusis and the combination of both, defined as hearing impairment. In the Lareb database, six reports concerning individuals aged 31-82 years, who developed hearing impairment after starting oral terbinafine, were present. The use of oral terbinafine was disproportionately associated with hypoacusis in both the Lareb database (adjusted ROR 3.9; 95% CI 1.7, 9.0) and VigiBase™ (adjusted ROR 1.7; 95% CI 1.0, 2.8). Deafness was not disproportionately present in either of the databases. Based on the described cases and the statistical analyses from both databases, a causal relationship between the use of oral terbinafine and hearing impairment is possible. The mechanism by which terbinafine could cause hearing impairment has not yet been elucidated. The pharmacological action of terbinafine is based on the inhibition of squalene epoxidase, an enzyme present in both fungal and human cells. This inhibition might result in a decrease in cholesterol levels in human cells, among which are the outer hair cells of the cochlea. It may be possible that the reduction in cochlear cholesterol levels leads to impaired cochlear function and possibly hearing impairment. In this study we describe hearing impairment as a possible ADR of oral terbinafine, based on six case reports and statistical support from VigiBase™ and the Lareb database. To our knowledge this association has not been described before.
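
    The disproportionality statistic used here is straightforward to reproduce. A sketch of an unadjusted ROR with a Woolf 95% confidence interval (the counts are invented; the study's RORs were additionally adjusted by logistic regression):

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with Woolf 95% CI from a 2x2 table of spontaneous reports:
    a: drug & reaction, b: drug & other reactions,
    c: other drugs & reaction, d: other drugs & other reactions."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = ror * math.exp(-1.96 * se), ror * math.exp(1.96 * se)
    return ror, lo, hi

ror, lo, hi = reporting_odds_ratio(a=6, b=220, c=450, d=64000)
print(f"ROR {ror:.1f} (95% CI {lo:.1f}, {hi:.1f})")
```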

  3. SU-E-T-541: Measurement of CT Density Model Variations and the Impact On the Accuracy of Monte Carlo (MC) Dose Calculation in Stereotactic Body Radiation Therapy for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiang, H; Li, B; Behrman, R

    2015-06-15

    Purpose: To measure the CT density model variations between different CT scanners used for treatment planning and the impact on the accuracy of MC dose calculation in lung SBRT. Methods: A Gammex electron density phantom (RMI 465) was scanned on two 64-slice CT scanners (GE LightSpeed VCT64) and a 16-slice CT (Philips Brilliance Big Bore CT). All three scanners had been used to acquire CT for CyberKnife lung SBRT treatment planning. To minimize the influences of beam hardening and scatter for improving reproducibility, three scans were acquired with the phantom rotated 120° between scans. The mean CT HU of each density insert, averaged over the three scans, was used to build the CT density models. For 14 patient plans, repeat MC dose calculations were performed by using the scanner-specific CT density models and compared to a baseline CT density model in the base plans. All dose re-calculations were done using the same plan beam configurations and MUs. Comparisons of dosimetric parameters included PTV volume covered by prescription dose, mean PTV dose, V5 and V20 for lungs, and the maximum dose to the closest critical organ. Results: Up to 50.7 HU variations in CT density models were observed over the baseline CT density model. For the 14 patient plans examined, maximum differences in MC dose re-calculations were less than 2% in 71.4% of the cases, less than 5% in 85.7% of the cases, and 5–10% in 14.3% of the cases. As all the base plans well exceeded the clinical objectives of target coverage and OAR sparing, none of the observed differences led to clinically significant concerns. Conclusion: Marked variations of CT density models were observed for the three different CT scanners. Though the differences can cause up to 5–10% differences in MC dose calculations, it was found that they caused no clinically significant concerns.

  4. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    PubMed

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
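
    The statistic combines a robust baseline trend fit with a rank correlation on the corrected series. A simplified sketch of that two-step logic with SciPy (a plausible reading of the procedure, not Tarlow's reference implementation, which among other things first tests whether trend correction is needed at all):

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

def baseline_corrected_tau(baseline, treatment):
    """Kendall's tau between phase membership and trend-corrected scores.

    Step 1: fit a robust Theil-Sen line to the baseline phase only.
    Step 2: subtract that baseline trend from *all* observations.
    Step 3: correlate the corrected scores with a 0/1 phase indicator.
    """
    t_base = np.arange(len(baseline))
    slope, intercept, _, _ = theilslopes(baseline, t_base)
    t_all = np.arange(len(baseline) + len(treatment))
    corrected = np.concatenate([baseline, treatment]) - (intercept + slope * t_all)
    phase = np.concatenate([np.zeros(len(baseline)), np.ones(len(treatment))])
    tau, p = kendalltau(phase, corrected)
    return tau, p

baseline = [3, 4, 4, 5, 6]          # already improving before treatment
treatment = [7, 8, 8, 9, 9, 10]
print(baseline_corrected_tau(baseline, treatment))
```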

  5. Coupling of atom-by-atom calculations of extended defects with B kick-out equations: application to the simulation of boron TED

    NASA Astrophysics Data System (ADS)

    Lampin, E.; Cristiano, F.; Lamrani, Y.; Colombeau, B.

    2004-02-01

    We present simulations of B TED based on a complete calculation of the extended defect growth/shrinkage during annealing. The Si self-interstitial supersaturation calculated at the extended defect depth is coupled to the set of equations for the B kick-out diffusion through a generation/recombination term in the diffusion equation of the Si self-interstitials. The simulations are compared to measurements performed on a Si wafer containing several B marker layers, where the amount of TED varies from one peak to the other. The good agreement obtained on this experiment is very promising for the application of these calculations to the case of ultra-shallow B+ implants.

  6. [Is there a place for the Glasgow-Blatchford score in the management of upper gastrointestinal bleeding?].

    PubMed

    Jerraya, Hichem; Bousslema, Amine; Frikha, Foued; Dziri, Chadli

    2011-12-01

    Upper gastrointestinal bleeding is a frequent cause of emergency hospital admission. Most severity scores include endoscopic findings in their computation. The Glasgow-Blatchford score is a validated score, easy to calculate from simple clinical and biological variables, that can identify patients with a low or a high risk of needing a therapeutic intervention (interventional endoscopy, surgery and/or transfusion). The aim was to validate the Glasgow-Blatchford score (GBS) retrospectively. The study examined all patients admitted to the general surgery and anesthesiology departments of the Regional Hospital of Sidi Bouzid. There were 50 patients (35 men and 15 women) with a mean age of 58 years. The GBS was calculated for all of these patients. The series was divided into two groups: 26 cases received only medical treatment and 24 cases required transfusion and/or surgery. Univariate analysis was performed to compare these two groups, and the ROC curve was then used to identify the cut-off point of the GBS. Sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated with 95% confidence intervals. The GBS was significantly different between the two groups (p < 0.0001). Using the ROC curve, it was determined that for the threshold GBS ≥ 7, Se = 96% (88-100%), Sp = 69% (51-87%), PPV = 74% (59-90%) and NPV = 95% (85-100%). This threshold is interesting for its NPV: if the GBS is < 7, medical treatment can be chosen with the risk of being wrong in only 5% of cases. The Glasgow-Blatchford score is based on simple clinical and laboratory variables. It can identify, in the emergency department, the cases that require medical treatment and those whose management could require blood transfusion and/or surgical treatment.
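
    The threshold search described here amounts to scanning candidate cut-offs along the ROC curve and keeping the one with the best sensitivity/specificity trade-off. A minimal sketch (scores and outcomes are simulated, and Youden's J is assumed as the selection criterion):

```python
import numpy as np

def best_cutoff(scores, needs_intervention):
    """Pick the score threshold maximizing Youden's J = Se + Sp - 1."""
    scores = np.asarray(scores)
    y = np.asarray(needs_intervention, dtype=bool)
    best = None
    for thr in np.unique(scores):
        pred = scores >= thr
        se = (pred & y).sum() / y.sum()
        sp = (~pred & ~y).sum() / (~y).sum()
        j = se + sp - 1
        if best is None or j > best[0]:
            best = (j, thr, se, sp)
    return best

rng = np.random.default_rng(2)
y = np.repeat([0, 1], [26, 24])   # group sizes as reported in the study
scores = np.concatenate([rng.poisson(4, 26), rng.poisson(9, 24)])
j, thr, se, sp = best_cutoff(scores, y)
print(f"cut-off GBS >= {thr}: Se = {se:.0%}, Sp = {sp:.0%} (J = {j:.2f})")
```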

  7. Transverse parton distribution functions at next-to-next-to-leading order: the quark-to-quark case.

    PubMed

    Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin

    2012-12-14

    We present a calculation of the perturbative quark-to-quark transverse parton distribution function at next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate for the first time that such a definition works beyond the first nontrivial order. We extract from our calculation the coefficient functions relevant for a next-to-next-to-next-to-leading logarithmic Q_T resummation in a large class of processes at hadron colliders.

  8. BRIEF COMMUNICATION: The negative ion flux across a double sheath at the formation of a virtual cathode

    NASA Astrophysics Data System (ADS)

    McAdams, R.; Bacal, M.

    2010-08-01

    For the case of negative ions from a cathode entering a plasma, the maximum negative ion flux and the positive ion flux before the formation of a virtual cathode have been calculated for particular plasma conditions. The calculation is based on a simple modification of an analysis of electron emission into a plasma containing negative ions. The results are in good agreement with a 1d3v PIC code model.

  9. Solutions of the heat conduction equation in multilayers for photothermal deflection experiments

    NASA Technical Reports Server (NTRS)

    Mcgahan, William A.; Cole, K. D.

    1992-01-01

    Analytical expressions for the temperature and laser beam deflection in a multilayer medium are derived using Green function techniques. The approach is based on calculation of the normal component of the heat fluxes across the boundaries, from which either the beam deflections or the temperature anywhere in space can be found. A general expression for the measured signals in the case of four-quadrant detection is also presented and compared with previous calculations of detector response for finite probe beams.

  10. Discussion on accuracy degree evaluation of accident velocity reconstruction model

    NASA Astrophysics Data System (ADS)

    Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike

    In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of such models is given. Based on the pre-crash velocities in theory and in calculation, an accuracy degree evaluation formula is obtained. With a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that this method is feasible in practice.

  11. A variation-perturbation method for atomic and molecular interactions. I - Theory. II - The interaction potential and van der Waals molecule for Ne-HF

    NASA Astrophysics Data System (ADS)

    Gallup, G. A.; Gerratt, J.

    1985-09-01

    The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.

  12. The application of tailor-made force fields and molecular dynamics for NMR crystallography: a case study of free base cocaine

    PubMed Central

    Neumann, Marcus A.

    2017-01-01

    Motional averaging has been proven to be significant in predicting the chemical shifts in ab initio solid-state NMR calculations, and the applicability of motional averaging with molecular dynamics has been shown to depend on the accuracy of the molecular mechanical force field. The performance of a fully automatically generated tailor-made force field (TMFF) for the dynamic aspects of NMR crystallography is evaluated and compared with existing benchmarks, including static dispersion-corrected density functional theory calculations and the COMPASS force field. The crystal structure of free base cocaine is used as an example. The results reveal that, even though the TMFF outperforms the COMPASS force field for representing the energies and conformations of predicted structures, it does not give significant improvement in the accuracy of NMR calculations. Further studies should direct more attention to anisotropic chemical shifts and development of the method of solid-state NMR calculations. PMID:28250956

  13. Thermodynamic Properties and Transport Coefficients of Nitrogen, Hydrogen and Helium Plasma Mixed with Silver Vapor

    NASA Astrophysics Data System (ADS)

    Zhou, Xue; Cui, Xinglei; Chen, Mo; Zhai, Guofu

    2016-05-01

    Species compositions of Ag-N2, Ag-H2 and Ag-He plasmas in the temperature range of 3,000-20,000 K and at atmospheric pressure (1 atm) were calculated using the minimization of Gibbs free energy. Thermodynamic properties and transport coefficients of nitrogen, hydrogen and helium plasmas mixed with a variety of silver vapor concentrations were then calculated based on the equilibrium compositions and collision integral data. The calculation procedure was verified by comparing the results obtained in this paper with published transport coefficients for the case of pure nitrogen plasma. The influences of the silver vapor concentration on the compositions, thermodynamic properties and transport coefficients were finally analyzed and summarized for all three types of plasmas. These physical properties are important for theoretical study and numerical calculation of arc plasmas generated by silver-based electrodes in these gases in sealed electromagnetic relays and contacts. supported by National Natural Science Foundation of China (Nos. 51277038 and 51307030)

  14. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need emerged for a new, rigorous, robust, accurate and at the same time standardized method for the computation of polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge), for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
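
    For an ideal gas the rigorous definition collapses to a closed form, which makes a compact sanity check for the end-point input described above (real-gas behavior, the paper's actual subject, instead requires an equation of state and a path integral):

```python
import math

def polytropic_efficiency_ideal(p1, T1, p2, T2, kappa=1.4):
    """Ideal-gas polytropic efficiency from suction (1)/discharge (2) states.

    The polytropic exponent n follows from T2/T1 = (p2/p1)**((n-1)/n);
    the efficiency is the ratio of the isentropic to the actual exponent
    terms: eta_p = ((kappa-1)/kappa) / ((n-1)/n).
    """
    n_term = math.log(T2 / T1) / math.log(p2 / p1)   # equals (n-1)/n
    return ((kappa - 1.0) / kappa) / n_term

# Air compressed from 1 bar/293 K to 4 bar/475 K (illustrative numbers).
print(f"eta_p = {polytropic_efficiency_ideal(1e5, 293.0, 4e5, 475.0):.3f}")
```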

  15. Solar neutrino masses and mixing from bilinear R-parity broken supersymmetry: Analytical versus numerical results

    NASA Astrophysics Data System (ADS)

    Díaz, M.; Hirsch, M.; Porod, W.; Romão, J.; Valle, J.

    2003-07-01

    We give an analytical calculation of solar neutrino masses and mixing at one-loop order within bilinear R-parity breaking supersymmetry, and compare our results to the exact numerical calculation. Our method is based on a systematic perturbative expansion of R-parity violating vertices to leading order. We find in general quite good agreement between the approximate and full numerical calculations, but the approximate expressions are much simpler to implement. Our formalism works especially well for the case of the large mixing angle Mikheyev-Smirnov-Wolfenstein solution, now strongly favored by the recent KamLAND reactor neutrino data.

  16. SU-E-T-129: Dosimetric Evaluation of the Impact of Density Correction On Dose Calculation of Breast Cancer Treatment: A Study Based On RTOG 1005 Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Yu, Y

    Purpose: RTOG 1005 requires density correction in the dose calculation of breast cancer radiation treatment. The aim of the study was to evaluate the impact of density correction on the dose calculation. Methods: Eight cases were studied, which were planned on an XiO treatment planning system with pixel-by-pixel density correction using a superposition algorithm, following RTOG 1005 protocol requirements. Four were protocol Arm 1 (standard whole breast irradiation with sequential boost) cases and four were Arm 2 (hypofractionated whole breast irradiation with concurrent boost) cases. The plans were recalculated with the same monitor units without density correction. Dose calculations with and without density correction were compared. Results: Results of Arm 1 and Arm 2 cases showed similar trends in the comparison. The average differences between the calculations with and without density correction (difference = Without - With) among all the cases were: −0.82 Gy (range: −2.65 to −0.18 Gy) in breast PTV Eval D95, −0.75 Gy (range: −1.23 to 0.26 Gy) in breast PTV Eval D90, −1.00 Gy (range: −2.46 to −0.29 Gy) in lumpectomy PTV Eval D95, −0.78 Gy (range: −1.30 to 0.11 Gy) in lumpectomy PTV Eval D90, −0.43% (range: −0.95 to −0.14%) in ipsilateral lung V20, −0.81% (range: −1.62 to −0.26%) in V16, −1.95% (range: −4.13 to −0.84%) in V10, −2.64% (range: −5.55 to −1.04%) in V8, −4.19% (range: −6.92 to −1.81%) in V5, and −4.95% (range: −7.49 to −2.01%) in V4, respectively. The differences in other normal tissues were minimal. Conclusion: The effect of density correction was observed in the breast target doses (an average increase of ~1 Gy in D95 and D90, compared to the calculation without density correction) and in the exposed ipsilateral lung volumes in the low-dose region (average increases of ~4% and ~5% in V5 and V4, respectively).

  17. An assessment of the CORCON-MOD3 code. Part 1: Thermal-hydraulic calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strizhov, V.; Kanukova, V.; Vinogradova, T.

    1996-09-01

    This report deals with the subject of CORCON-Mod3 code validation (thermal-hydraulic modeling capability only) based on MCCI (molten core concrete interaction) experiments conducted under different programs in the past decade. Thermal-hydraulic calculations (i.e., concrete ablation, melt temperature, melt energy, concrete temperature, and condensible and non-condensible gas generation) were performed with the code and compared with the data from 15 experiments, conducted at different scales using both simulant (metallic and oxidic) and prototypic melt materials, using different concrete types, and with and without an overlying water pool. Sensitivity studies were performed in a few cases involving, for example, heat transfer from melt to concrete, condensed phase chemistry, etc. Further, a special analysis was performed using the ACE L8 experimental data to illustrate the differences between the experimental and the reactor conditions, and to demonstrate that with proper corrections made to the code, the calculated results were in better agreement with the experimental data. Generally, in the case of dry cavity and metallic melts, CORCON-Mod3 thermal-hydraulic calculations were in good agreement with the test data. For oxidic melts in a dry cavity, uncertainties in heat transfer models played an important role for two melt configurations--a stratified geometry with segregated metal and oxide layers, and a heterogeneous mixture. Some discrepancies in the gas release data were noted in a few cases.

  18. Application of computational thermodynamics in the study of magnesium alloys and bulk metallic glasses

    NASA Astrophysics Data System (ADS)

    Cao, Hongbo

    In this thesis, the application of computational thermodynamics has been explored on two subjects, the study of magnesium alloys (Chapters 1-5) and bulk metallic glasses (BMGs) (Chapters 6-9). For the former case, a strategy of experiments coupled with the CALPHAD approach was employed to establish a thermodynamic description of the quaternary system Mg-Al-Ca-Sr focusing on the Mg-rich phase equilibria. Multicomponent Mg-rich alloys based on the Mg-Al-Ca-Sr system are one of the most promising candidates for high temperature applications in the transportation industry. The Mg-Al-Ca-Sr quaternary consists of four ternaries and six binaries. Thermodynamic descriptions of all constituent binaries are available in the literature. Thermodynamic descriptions of the two key ternaries, Mg-Al-Sr and Mg-Al-Ca, were obtained by an efficient and reliable methodology combining computational thermodynamics with key experiments. The obtained thermodynamic descriptions were validated by performing extensive comparisons between the calculations and experimental information. Thermodynamic descriptions of the other two ternaries, Mg-Ca-Sr and Al-Ca-Sr, were obtained by extrapolation. For the latter case, a computational thermodynamic strategy was formulated to obtain a minor but optimum amount of an additional element in a base alloy to improve its glass forming ability (GFA). This was done by thermodynamically calculating the maximum liquidus depressions caused by various alloying addition (or replacement) schemes. The success of this approach has been examined in two multicomponent systems, Zr-based Zr-Cu-Ni-Al-Ti and Cu-rich Cu-Zr-Ti-Y. For both cases, experimental results showed conclusively that the GFA increases more than 100% from the base alloy to the one with the minor but optimal elemental addition. Furthermore, a thermodynamic computational approach was employed to identify the compositions of Zr-Ti-Ni-Cu-Al alloys exhibiting low-lying liquidus surfaces, which tend to favor BMG formation. Guided by these calculations, several series of new Zr-based alloys with excellent GFA were synthesized. The approach using the thermodynamically calculated liquidus temperatures proved to be robust in locating BMGs and can be considered a universal method to predict novel BMGs of not only scientific interest but also potential technological application.

  19. Analysis of Federal Subsidies: Implied Price of Carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Craig Cooper; Thomas Foulke

    2010-10-01

    For informed climate change policy, it is important for decision makers to be able to assess how the costs and benefits of federal energy subsidies are distributed and to have some measure by which to compare them. One way to do this is to evaluate the implied price of carbon (IPC) for a federal subsidy, or set of subsidies, where the IPC is the cost of the subsidy to the U.S. Treasury divided by the emissions reductions it generated. Subsidies with a lower IPC are more cost effective at reducing greenhouse gas emissions, while subsidies with a negative IPC act to increase emissions. While simple in concept, the IPC is difficult to calculate in practice. Calculation of the IPC requires knowledge of (i) the amount of energy associated with the subsidy, (ii) the amount and type of energy that would have been produced in the absence of the subsidy, and (iii) the greenhouse gas emissions associated with both the subsidized energy and the potential replacement energy. These pieces of information are not consistently available for federal subsidies, and there is considerable uncertainty in cases where the information is available. Thus, exact values for the IPC based upon fully consistent standards cannot be calculated with available data. However, it is possible to estimate a range of potential values sufficient for initial comparisons. This study employed a range of methods to generate "first order" estimates for the IPC of a range of federal subsidies using static methods that do not account for the dynamics of supply and demand. The study demonstrates that, while the IPC value depends upon how the inquiry is framed and the IPC cannot be calculated in a "one size fits all" manner, IPC calculations can provide a valuable perspective for climate policy analysis. IPC values are most useful when calculated within the perspective of a case study, with the method and parameters of the calculation determined by the case. The IPC of different policy measures can then be quantitatively evaluated within the case. Results can be qualitatively compared across cases, so long as such comparisons are considered preliminary and treated with the appropriate level of caution.
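
    The defining ratio is simple even though its inputs are hard to pin down. A sketch of a static, first-order IPC estimate in the spirit described above (all numbers are placeholders, not figures from the study):

```python
def implied_price_of_carbon(subsidy_cost_usd,
                            subsidized_tCO2, counterfactual_tCO2):
    """IPC = subsidy cost / emissions reduction, in $/tCO2.

    A negative reduction (the subsidy increased emissions) yields a
    negative IPC, flagging a counterproductive subsidy.
    """
    reduction = counterfactual_tCO2 - subsidized_tCO2
    return subsidy_cost_usd / reduction

# Placeholder example: a $50M subsidy; the subsidized generation emits
# 1.2 MtCO2 where the displaced generation would have emitted 2.0 MtCO2.
print(f"IPC = {implied_price_of_carbon(50e6, 1.2e6, 2.0e6):.0f} $/tCO2")
```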

  20. [Diagnosis related groups in stroke treatment. An analysis from the stroke data bank of the German Stroke Foundation].

    PubMed

    Weimar, C; Stausberg, J; Kraywinkel, K; Wagner, M; Busse, O; Haberl, R L; Diener, H-C

    2002-08-02

    The upcoming introduction of diagnosis related groups (DRG) as the exclusive basis for future calculation of hospital proceeds in Germany requires a thorough analysis of cost data for various diseases. The aim was to compare the resulting combined cost weights of the Australian Refined DRG system (AR-DRG) with the proceeds based on actual per-day rates in stroke treatment. Between 1998 and 1999, data from 6520 patients (median age 68 years, 43% women) with acute stroke or transient ischemic attack (TIA) were prospectively documented in 15 departments of neurology with an acute stroke unit, 9 departments of general neurology and 6 departments of internal medicine. Prior to grouping cases into DRGs, all available data were transferred into ICD-10-SGB-V 2.0 or the Australian procedure system (MBS-Extended). Hospital proceeds for the respective cases were calculated based on the per-day rates of the documenting hospitals. The resulting cost weights demonstrate good homogeneity relative to the length of stay. With the introduction of the AR-DRG system with a uniform base rate in Germany, a relative decrease of hospital proceeds can be expected in neurology departments and for the treatment of TIAs. Preservation of the existing structure of acute stroke care in Germany would require a supplement to a uniform base rate in neurology departments.

  1. On the nature of solvatochromic effect: The riboflavin absorption spectrum as a case study

    NASA Astrophysics Data System (ADS)

    Daidone, Isabella; Amadei, Andrea; Aschi, Massimiliano; Zanetti-Polzi, Laura

    2018-03-01

    We present here the calculation of the absorption spectrum of riboflavin in acetonitrile and dimethyl sulfoxide using a hybrid quantum/classical approach, namely the perturbed matrix method, based on quantum mechanical calculations and molecular dynamics simulations. The calculated spectra are compared to the absorption spectrum of riboflavin previously calculated in water and to the experimental spectra obtained in all three solvents. The experimentally observed variations in the absorption spectra upon change of the solvent environment are well reproduced by the calculated spectra. In addition, the nature of the excited states of riboflavin interacting with different solvents is investigated, showing that environment effects determine a recombination of the gas-phase electronic states and that such a recombination is strongly affected by the polarity of the solvent inducing significant changes in the absorption spectra.

  2. Validation Test Report for the CRWMS Analysis and Logistics Visually Interactive Model CALVIN Version 3.0, 10074-VTR-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results compare with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  3. [An analysis of cost and profit of a nursing unit using performance-based costing: case of a general surgical ward in a general hospital].

    PubMed

    Lim, Ji Young

    2008-02-01

    The aim of this study was to analyze the net income of a surgical nursing ward in a general hospital. Data collection and analysis were conducted using performance-based costing and activity-based costing methods. There were 68 direct nursing activities in the surgical ward and 10 indirect nursing activities. The total cost volume of the surgical ward was calculated at 119,913,334.5 won. The cost volume of the allocated medical department was 91,588,200.3 won, and the ward consumed cost was 28,325,134.2 won. The revenue of the surgical nursing ward was 33,269,925.0 won. The expense of the surgical nursing ward was 28,325,134.2 won. Therefore, the net income of the surgical nursing ward was 4,944,790.8 won. We suggest that to develop a more refined nursing cost calculation model, a standard nursing cost calculation system needs to be developed.
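
    The bottom-line arithmetic can be reproduced directly from the figures reported above (a simple check of the published totals, in Korean won):

```python
# Figures reported in the study, in Korean won.
allocated_dept_cost = 91_588_200.3   # allocated medical department cost
ward_consumed_cost = 28_325_134.2    # cost consumed by the nursing ward
revenue = 33_269_925.0

total_cost = allocated_dept_cost + ward_consumed_cost
net_income = revenue - ward_consumed_cost

print(f"total cost volume: {total_cost:,.1f} won")   # 119,913,334.5
print(f"net income:        {net_income:,.1f} won")   # 4,944,790.8
```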

  4. Evaluation of a surveillance case definition for anogenital warts, Kaiser Permanente northwest.

    PubMed

    Naleway, Allison L; Weinmann, Sheila; Crane, Brad; Gee, Julianne; Markowitz, Lauri E; Dunne, Eileen F

    2014-08-01

    Most studies of anogenital wart (AGW) epidemiology have used large clinical or administrative databases and unconfirmed case definitions based on combinations of diagnosis and procedure codes. We developed and validated an AGW case definition using a combination of diagnosis codes and other information available in the electronic medical record (provider type, laboratory testing). We calculated the positive predictive value (PPV) of this case definition compared with manual medical record review in a random sample of 250 cases. Using this case definition, we calculated the annual age- and sex-stratified prevalence of AGW among individuals 11 through 30 years of age from 2000 through 2005. We identified 2730 individuals who met the case definition. The PPV of the case definition was 82%, and the average annual prevalence was 4.16 per 1000. Prevalence of AGW was higher in females compared with males in every age group, with the exception of the 27- to 30-year-olds. Among females, prevalence peaked in the 19- to 22-year-olds, and among males, the peak was observed in 23- to 26-year-olds. The case definition developed in this study is the first to be validated with medical record review and has a good PPV for the detection of AGW. The prevalence rates observed in this study were higher than other published rates, but the age- and sex-specific patterns observed were consistent with previous reports.
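
    A short sketch of the two quantities reported above. Only the 82% PPV, the 250-chart review sample, and the 4.16 per 1,000 prevalence come from the abstract; the individual counts below are hypothetical values chosen to be consistent with them:

        # Positive predictive value of the case definition against chart review
        reviewed = 250            # random sample of algorithm-identified cases
        confirmed = 205           # hypothetical count consistent with the reported 82% PPV
        ppv = confirmed / reviewed

        # Average annual prevalence per 1,000 enrollees aged 11-30 (counts hypothetical)
        cases_per_year = 455
        enrolled_population = 109_375
        prevalence_per_1000 = 1000 * cases_per_year / enrolled_population

        print(f"PPV = {ppv:.0%}, prevalence = {prevalence_per_1000:.2f} per 1,000")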

  5. Ecosystem Services Insights into Water Resources Management in China: A Case of Xi’an City

    PubMed Central

    Liu, Jingya; Li, Jing; Gao, Ziyi; Yang, Min; Qin, Keyu; Yang, Xiaonan

    2016-01-01

    Global climate and environmental changes are endangering global water resources, and several approaches have been tested to manage and reduce the pressure on these decreasing resources. This study uses Xi’an City in China as a case study to test reasonable and effective methods to address water resource shortages. The study generated a framework combining ecosystem services and water resource management. Seven ecosystem indicators were classified as supply services, regulating services, or cultural services. Index values for each indicator were calculated, and each index’s weight was derived from questionnaire results. Using the Likert method, we calculated ecosystem service supplies in every region of the city. We found that the ecosystem’s service capability is closely related to water resources. Using Xi’an City as an example, we apply the ecosystem services concept to water resources management, providing a method for decision makers. PMID:27886137
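
    A minimal sketch of the weighted-index step described above; the seven indicator names, index values, and questionnaire weights are all hypothetical placeholders:

        # Weighted ecosystem-service supply score for one region of the city
        index_values = {     # normalized index value per indicator (hypothetical)
            "water_supply": 0.62, "food_supply": 0.55, "climate_regulation": 0.48,
            "flood_regulation": 0.40, "water_purification": 0.51,
            "recreation": 0.33, "cultural_heritage": 0.29,
        }
        weights = {          # questionnaire-derived weights, summing to 1 (hypothetical)
            "water_supply": 0.22, "food_supply": 0.12, "climate_regulation": 0.18,
            "flood_regulation": 0.15, "water_purification": 0.16,
            "recreation": 0.09, "cultural_heritage": 0.08,
        }
        supply_score = sum(index_values[k] * weights[k] for k in index_values)
        print(f"ecosystem service supply score: {supply_score:.3f}")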

  6. Obesity interacts with infectious mononucleosis in risk of multiple sclerosis.

    PubMed

    Hedström, A K; Lima Bomfim, I; Hillert, J; Olsson, T; Alfredsson, L

    2015-03-01

    The possible interaction between adolescent obesity and past infectious mononucleosis (IM) was investigated with regard to multiple sclerosis (MS) risk. This report is based on two population-based case-control studies, one with incident cases (1780 cases, 3885 controls) and one with prevalent cases (4502 cases, 4039 controls). Subjects were categorized based on adolescent body mass index (BMI) and past IM and compared with regard to occurrence of MS by calculating odds ratios with 95% confidence intervals (CIs) employing logistic regression. A potential interaction between adolescent BMI and past IM was evaluated by calculating the attributable proportion due to interaction. Regardless of human leukocyte antigen (HLA) status, a substantial interaction was observed between adolescent obesity and past IM with regard to MS risk. The interaction was most evident when IM after the age of 10 was considered (attributable proportion due to interaction 0.8, 95% CI 0.6-1.0 in the incident study, and attributable proportion due to interaction 0.7, 95% CI 0.5-1.0 in the prevalent study). In the incident study, the odds ratio of MS was 14.7 (95% CI 5.9-36.6) amongst subjects with adolescent obesity and past IM after the age of 10, compared with subjects with none of these exposures. The corresponding odds ratio in the prevalent study was 13.2 (95% CI 5.2-33.6). An obese state both impacts the cellular immune response to infections and induces a state of chronic immune-mediated inflammation, which may help explain our finding of an interaction between adolescent BMI and past IM. Measures taken against adolescent obesity may thus be a preventive strategy against MS. © 2014 The Authors. European Journal of Neurology published by John Wiley & Sons Ltd on behalf of European Academy of Neurology.
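
    The interaction measure used above can be sketched with Rothman's formula AP = (OR11 - OR10 - OR01 + 1) / OR11. The joint odds ratio 14.7 is from the incident study; the two single-exposure odds ratios are hypothetical values chosen only so the result lands near the reported AP of 0.8:

        # Attributable proportion (AP) due to interaction between two exposures
        OR11 = 14.7    # adolescent obesity AND infectious mononucleosis after age 10
        OR10 = 2.0     # hypothetical: obesity only
        OR01 = 1.94    # hypothetical: mononucleosis only
        AP = (OR11 - OR10 - OR01 + 1) / OR11
        print(f"attributable proportion due to interaction: {AP:.1f}")   # ~0.8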

  7. Post—September 11, 2001, Incidence of Systemic Autoimmune Diseases in World Trade Center—Exposed Firefighters and Emergency Medical Service Workers

    PubMed Central

    Webber, Mayris P.; Moir, William; Crowson, Cynthia S.; Cohen, Hillel W.; Zeig-Owens, Rachel; Hall, Charles B.; Berman, Jessica; Qayyum, Basit; Jaber, Nadia; Matteson, Eric L.; Liu, Yang; Kelly, Kerry; Prezant, David J.

    2016-01-01

    Objective To estimate the incidence of selected systemic autoimmune diseases (SAIDs) in approximately 14,000 male rescue/recovery workers enrolled in the Fire Department of the City of New York (FDNY) World Trade Center (WTC) Health Program and to compare FDNY incidence to rates from demographically similar men in the Rochester Epidemiology Project (REP), a population-based database in Olmsted County, Minnesota. Patients and Methods We calculated incidence for specific SAIDs (rheumatoid arthritis, psoriatic arthritis, systemic lupus erythematosus, and others) and combined SAIDs diagnosed from September 12, 2001, through September 11, 2014, and generated expected sex- and age-specific rates based on REP rates. Rates were stratified by level of WTC exposure (higher vs lower). Standardized incidence ratios (SIRs), which are the ratios of the observed number of cases in the FDNY group to the expected number of cases based on REP rates, and 95% CIs were calculated. Results We identified 97 SAID cases. Overall, FDNY rates were not significantly different from expected rates (SIR, 0.97; 95% CI, 0.77–1.21). However, the lower WTC exposure group had 9.9 fewer cases than expected, whereas the higher WTC exposure group had 7.7 excess cases. Conclusion Most studies indicate that the healthy worker effect reduces the association between exposure and outcome by about 20%, which we observed in the lower WTC exposure group. Overall rates masked differences in incidence by level of WTC exposure, especially because the higher WTC exposure group was relatively small. Continued surveillance for early detection of SAIDs in high WTC exposure populations is required to identify and treat exposure-related adverse effects. PMID:26682920
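
    A small sketch of the SIR calculation, assuming an expected count back-calculated from the reported SIR of 0.97; the exact Poisson limits below approximately reproduce the reported 95% CI:

        from scipy.stats import chi2

        observed = 97            # SAID cases identified in the FDNY cohort
        expected = 100.0         # hypothetical, back-calculated from SIR ~ 0.97
        sir = observed / expected

        # Exact Poisson 95% confidence limits for the observed count
        lower = chi2.ppf(0.025, 2 * observed) / 2 / expected
        upper = chi2.ppf(0.975, 2 * (observed + 1)) / 2 / expected
        print(f"SIR = {sir:.2f} (95% CI {lower:.2f}-{upper:.2f})")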

  8. Post-September 11, 2001, Incidence of Systemic Autoimmune Diseases in World Trade Center-Exposed Firefighters and Emergency Medical Service Workers.

    PubMed

    Webber, Mayris P; Moir, William; Crowson, Cynthia S; Cohen, Hillel W; Zeig-Owens, Rachel; Hall, Charles B; Berman, Jessica; Qayyum, Basit; Jaber, Nadia; Matteson, Eric L; Liu, Yang; Kelly, Kerry; Prezant, David J

    2016-01-01

    To estimate the incidence of selected systemic autoimmune diseases (SAIDs) in approximately 14,000 male rescue/recovery workers enrolled in the Fire Department of the City of New York (FDNY) World Trade Center (WTC) Health Program and to compare FDNY incidence to rates from demographically similar men in the Rochester Epidemiology Project (REP), a population-based database in Olmsted County, Minnesota. We calculated incidence for specific SAIDs (rheumatoid arthritis, psoriatic arthritis, systemic lupus erythematosus, and others) and combined SAIDs diagnosed from September 12, 2001, through September 11, 2014, and generated expected sex- and age-specific rates based on REP rates. Rates were stratified by level of WTC exposure (higher vs lower). Standardized incidence ratios (SIRs), which are the ratios of the observed number of cases in the FDNY group to the expected number of cases based on REP rates, and 95% CIs were calculated. We identified 97 SAID cases. Overall, FDNY rates were not significantly different from expected rates (SIR, 0.97; 95% CI, 0.77-1.21). However, the lower WTC exposure group had 9.9 fewer cases than expected, whereas the higher WTC exposure group had 7.7 excess cases. Most studies indicate that the healthy worker effect reduces the association between exposure and outcome by about 20%, which we observed in the lower WTC exposure group. Overall rates masked differences in incidence by level of WTC exposure, especially because the higher WTC exposure group was relatively small. Continued surveillance for early detection of SAIDs in high WTC exposure populations is required to identify and treat exposure-related adverse effects. Copyright © 2016. Published by Elsevier Inc.

  9. Predicting Bond Dissociation Energies of Transition-Metal Compounds by Multiconfiguration Pair-Density Functional Theory and Second-Order Perturbation Theory Based on Correlated Participating Orbitals and Separated Pairs.

    PubMed

    Bao, Junwei Lucas; Odoh, Samuel O; Gagliardi, Laura; Truhlar, Donald G

    2017-02-14

    We study the performance of multiconfiguration pair-density functional theory (MC-PDFT) and multireference perturbation theory for the computation of the bond dissociation energies in 12 transition-metal-containing diatomic molecules and three small transition-metal-containing polyatomic molecules and in two transition-metal dimers. The first step is a multiconfiguration self-consistent-field calculation, for which two choices must be made: (i) the active space and (ii) its partition into subspaces, if the generalized active space formulation is used. In the present work, the active space is chosen systematically by using three correlated-participating-orbitals (CPO) schemes, and the partition is chosen by using the separated-pair (SP) approximation. Our calculations show that MC-PDFT generally has similar accuracy to CASPT2, and the active-space dependence of MC-PDFT is not very great for transition-metal-ligand bond dissociation energies. We also find that the SP approximation works very well, and in particular SP with the fully translated BLYP functional SP-ftBLYP is more accurate than CASPT2. SP greatly reduces the number of configuration state functions relative to CASSCF. For the cases of FeO and NiO with extended-CPO active space, for which complete active space calculations are unaffordable, SP calculations are not only affordable but also of satisfactory accuracy. All of the MC-PDFT results are significantly better than the corresponding results with broken-symmetry spin-unrestricted Kohn-Sham density functional theory. Finally we test a perturbation theory method based on the SP reference and find that it performs slightly worse than CASPT2 calculations, and for most cases of the nominal-CPO active space, the approximate SP perturbation theory calculations are less accurate than the much less expensive SP-PDFT calculations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwai, P; Lins, L Nadler

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD), or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA), and Acuros XB (AXB), as well as heterogeneity correction, on risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using three different calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (29% and 4% higher, respectively). The maximum difference of doses calculated by each algorithm was about 1 Gy, whether using heterogeneity correction or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal or higher in 84% of the cases with PBC, 77% with AAA, and 67% with AXB than the dose obtained with no heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict dose at the CIED, using heterogeneity correction.

  11. Estimation of the National Disease Burden of Influenza-Associated Severe Acute Respiratory Illness in Kenya and Guatemala: A Novel Methodology

    PubMed Central

    Katz, Mark A.; Lindblade, Kim A.; Njuguna, Henry; Arvelo, Wences; Khagayi, Sammy; Emukule, Gideon; Linares-Perez, Nivaldo; McCracken, John; Nokes, D. James; Ngama, Mwanajuma; Kazungu, Sidi; Mott, Joshua A.; Olsen, Sonja J.; Widdowson, Marc-Alain; Feikin, Daniel R.

    2013-01-01

    Background Knowing the national disease burden of severe influenza in low-income countries can inform policy decisions around influenza treatment and prevention. We present a novel methodology using locally generated data for estimating this burden. Methods and Findings This method begins with calculating the hospitalized severe acute respiratory illness (SARI) incidence for children <5 years old and persons ≥5 years old from population-based surveillance in one province. This base rate of SARI is then adjusted for each province based on the prevalence of risk factors and healthcare-seeking behavior. The percentage of SARI with influenza virus detected is determined from provincial-level sentinel surveillance and applied to the adjusted provincial rates of hospitalized SARI. Healthcare-seeking data from healthcare utilization surveys is used to estimate non-hospitalized influenza-associated SARI. Rates of hospitalized and non-hospitalized influenza-associated SARI are applied to census data to calculate the national number of cases. The method was field-tested in Kenya, and validated in Guatemala, using data from August 2009–July 2011. In Kenya (2009 population 38.6 million persons), the annual number of hospitalized influenza-associated SARI cases ranged from 17,129–27,659 for children <5 years old (2.9–4.7 per 1,000 persons) and 6,882–7,836 for persons ≥5 years old (0.21–0.24 per 1,000 persons), depending on year and base rate used. In Guatemala (2011 population 14.7 million persons), the annual number of hospitalized cases of influenza-associated pneumonia ranged from 1,065–2,259 (0.5–1.0 per 1,000 persons) among children <5 years old and 779–2,252 cases (0.1–0.2 per 1,000 persons) for persons ≥5 years old, depending on year and base rate used. In both countries, the number of non-hospitalized influenza-associated cases was several-fold higher than the hospitalized cases. Conclusions Influenza virus was associated with a substantial amount of severe disease in Kenya and Guatemala. This method can be performed in most low and lower-middle income countries. PMID:23573177
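
    A schematic sketch of the extrapolation chain described above; every number is a hypothetical placeholder, since the abstract reports only the final ranges:

        # National burden = adjusted SARI rate x influenza positivity x population,
        # repeated per province and age group; one age group is shown here.
        base_sari_rate = 3.9 / 1000      # hypothetical hospitalized SARI rate, <5 y
        adjustment = 1.2                 # hypothetical risk-factor / care-seeking adjustment
        flu_positive_frac = 0.10         # hypothetical share of SARI with influenza detected
        population = 6_000_000           # hypothetical population <5 y

        hospitalized = base_sari_rate * adjustment * flu_positive_frac * population
        nonhosp_multiplier = 4.0         # hypothetical: several-fold the hospitalized count
        total = hospitalized * (1 + nonhosp_multiplier)
        print(f"hospitalized: {hospitalized:,.0f}; hospitalized + non-hospitalized: {total:,.0f}")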

  12. Case mix measures and diagnosis-related groups: opportunities and threats for inpatient dermatology.

    PubMed

    Hensen, P; Fürstenberg, T; Luger, T A; Steinhoff, M; Roeder, N

    2005-09-01

    The changing healthcare environment world-wide is leading to extensive use of per-case payment systems based on diagnosis-related groups (DRG). The aim of this study was to examine the impact of applying the different DRG systems used in the German healthcare system. We retrospectively analysed 2334 clinical data sets of inpatients discharged from an academic dermatological inpatient unit in 2003. Data were regarded as having high coding quality, in compliance with the diagnosis and procedure classifications as well as coding standards. The application of the Australian AR-DRG version 4.1, the German G-DRG version 1.0, and the German G-DRG version 2004 was considered in detail. To evaluate more specific aspects, data were broken down into 11 groups based on the principal diagnosis. DRG cost weights and case mix index were used to compare coverage of inpatient dermatological services. Economic impacts were illustrated by case mix volumes and calculation of DRG payments. Case mix index results and the resulting prospective revenues vary tremendously depending on which DRG system is applied. The G-DRG version 2004 provides increased case mix index levels that encourage, in particular, medical dermatology. The AR-DRG version 4.1 and the first German DRG version 1.0 appear to be less suitable for adequately covering inpatient dermatology. The G-DRG version 2004 has been greatly improved, probably owing to advancing calculation standards and DRG adjustments. The future of inpatient dermatology depends on the appropriate depiction of well-established treatment standards.
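
    A minimal sketch of the case-mix quantities compared above; the cost weights and base rate are hypothetical:

        # Case mix volume, case mix index (CMI), and prospective DRG revenue
        cost_weights = [0.8, 1.3, 0.6, 2.1, 1.0]   # DRG cost weight per inpatient case
        base_rate = 2900.0                          # payment per cost-weight unit (EUR)

        case_mix_volume = sum(cost_weights)
        cmi = case_mix_volume / len(cost_weights)   # average economic severity per case
        revenue = case_mix_volume * base_rate
        print(f"CMI = {cmi:.2f}, DRG revenue = {revenue:,.0f} EUR")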

  13. Validation of ACG Case-mix for equitable resource allocation in Swedish primary health care.

    PubMed

    Zielinski, Andrzej; Kronogård, Maria; Lenhoff, Håkan; Halling, Anders

    2009-09-18

    Adequate resource allocation is an important factor to ensure equity in health care. Previous reimbursement models have been based on age, gender and socioeconomic factors. An explanatory model based on individual need of primary health care (PHC) has not yet been used in Sweden to allocate resources. The aim of this study was to examine to what extent the ACG case-mix system could explain concurrent costs in Swedish PHC. Diagnoses were obtained from electronic PHC records of inhabitants in Blekinge County (approx. 150,000) listed with public PHC (approx. 120,000) for three consecutive years, 2004-2006. The inhabitants were then classified into six different resource utilization bands (RUB) using the ACG case-mix system. The mean costs for primary health care were calculated for each RUB and year. Using linear regression models with log-cost as the dependent variable, the adjusted R2 was calculated for the unadjusted model (gender) and for consecutive models in which age, listing with a specific PHC, and RUB were added. In an additional model the ACG groups were added. Gender, age and listing with a specific PHC explained 14.48-14.88% of the variance in individual costs for PHC. By also adding information on level of co-morbidity, as measured by the ACG case-mix system, the adjusted R2 increased to 60.89-63.41%. The ACG case-mix system explains patient costs in primary care to a high degree. Age and gender are important explanatory factors, but most of the variance in concurrent patient costs was explained by the ACG case-mix system.
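
    A sketch of the model-comparison step, using synthetic data in place of the Blekinge records; the coefficients and noise level are arbitrary:

        import numpy as np

        # Adjusted R^2 for a linear model of log-costs on age and RUB
        rng = np.random.default_rng(1)
        n = 5000
        age = rng.integers(0, 95, n).astype(float)
        rub = rng.integers(0, 6, n).astype(float)     # resource utilization band 0..5
        log_cost = 5.0 + 0.01 * age + 0.6 * rub + rng.normal(0.0, 0.8, n)

        X = np.column_stack([np.ones(n), age, rub])   # design matrix with intercept
        beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
        resid = log_cost - X @ beta
        r2 = 1.0 - resid.var() / log_cost.var()
        p = X.shape[1] - 1                            # number of predictors
        adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
        print(f"adjusted R^2 = {adj_r2:.3f}")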

  14. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)].

    PubMed

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-07

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  15. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  16. The probability of misassociation between neighboring targets

    NASA Astrophysics Data System (ADS)

    Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald

    2008-04-01

    This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular covariance-weighted norm of the difference between the targets' predicted measurements. For Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global vs. Nearest Neighbor association. These results are applied to problems of single-sensor as well as centralized-fusion-architecture multiple-sensor tracking.
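
    A Monte Carlo sketch of the Nearest Neighbor misassociation probability for the equal-innovation-covariance case; the covariance matrix and predicted-measurement difference are illustrative, and the noncentrality parameter is the covariance-weighted norm mentioned above:

        import numpy as np

        rng = np.random.default_rng(0)
        S = np.array([[4.0, 1.0],                 # common innovation covariance (assumed)
                      [1.0, 3.0]])
        delta = np.array([2.0, 1.0])              # difference of predicted measurements
        Sinv = np.linalg.inv(S)
        lam = delta @ Sinv @ delta                # covariance-weighted norm^2 (noncentrality)

        L = np.linalg.cholesky(S)
        n = 200_000
        own = rng.standard_normal((n, 2)) @ L.T          # innovations of the true measurement
        ext = delta + rng.standard_normal((n, 2)) @ L.T  # extraneous target's measurement
        d_own = np.einsum("ij,jk,ik->i", own, Sinv, own) # squared Mahalanobis distances
        d_ext = np.einsum("ij,jk,ik->i", ext, Sinv, ext)
        p_mis = (d_ext < d_own).mean()            # extraneous measurement wins NN association
        print(f"noncentrality = {lam:.2f}, P(misassociation) ~ {p_mis:.3f}")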

  17. Electron dose distributions caused by the contact-type metallic eye shield: Studies using Monte Carlo and pencil beam algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin

    A metallic contact eye shield has sometimes been used for eyelid treatment, but the dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated, CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For the artifact-free CT scan, an acrylic shield machined to the same size as the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, in which material information for the tungsten, stainless steel, and aluminum of the eye shield was used. The same plan was generated on the Pinnacle system and both were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and yielded a CT-based dose calculation for the metallic shield. Both the MC and the Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and the Pinnacle results was the target eyelid dose upstream of the shield: the Pinnacle system underestimated the dose by 19 to 28% and 11 to 18% for the maximum and the mean doses, respectively. The pattern of dose difference between the MC and the Pinnacle systems was similar to that in the previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into the CT-based planning, and accurate dose calculation requires MC simulation.

  18. Research on the influence of parking charging strategy based on multi-level extension theory of group decision making

    NASA Astrophysics Data System (ADS)

    Cheng, Fen; Hu, Wanxin

    2017-05-01

    Based on an analysis of domestic and international experience with parking policy, an impact analysis process for parking strategies is designed. First, group decision theory is used to create a parking strategy index system and to calculate the weights of its indicators; the index system covers the government, parking operators, and travelers. Then, multi-level extension theory is used to analyze the CBD parking strategy, assessing the strategy by calculating the correlation of each indicator. Finally, a parking charging strategy is assessed through a case study, providing a scientific and reasonable basis for assessing parking strategies. The results show that the model can effectively evaluate multi-target, multi-attribute parking policies.

  19. Job requirements compared to dental school education: impact of a case-based learning curriculum.

    PubMed

    Keeve, Philip L; Gerhards, Ute; Arnold, Wolfgang A; Zimmer, Stefan; Zöllner, Axel

    2012-01-01

    Case-based learning (CBL) is suggested as a key educational method of knowledge acquisition to improve dental education. The purpose of this study was to assess graduates from a patient-oriented, case-based learning (CBL)-based curriculum with regard to key competencies required in their professional activity. 407 graduates from a patient-oriented, case-based learning (CBL) dental curriculum who graduated between 1990 and 2006 were eligible for this study. 404 graduates were contacted between 2007 and 2008 to self-assess nine competencies as required at their day-to-day work and as taught in dental school on a 6-point Likert scale. Baseline demographics and clinical characteristics were presented as mean ± standard deviation (SD) for continuous variables. To determine whether dental education sufficiently covers the job requirements of dentists, we calculated the mean difference ∆ between the ratings of competencies as required in day-to-day work and as taught in dental school by subtracting those from each other (negative mean difference ∆ indicates deficit; positive mean difference ∆ indicates surplus). Spearman's rank correlation coefficient was calculated to reveal statistical significance (significance level p<0.05). 41.6% of the questionnaire recipients responded (n=168 graduates). The graduate groups were homogeneously distributed with respect to gender, graduation date, professional experience and average examination grade. Comparing competencies required at work and taught in dental school, CBL was associated with benefits in "Research competence" (∆+0.6), "Interdisciplinary thinking" (∆+0.47), "Dental medical knowledge" (∆+0.43), "Practical dental skills" (∆+0.21), "Team work" (∆+0.16) and "Independent learning/working" (∆+0.08), whereas "Problem-solving skills" (∆-0.07), "Psycho-social competence" (∆-0.66) and "Business competence" (∆-2.86) needed improvement in the CBL-based curriculum. CBL demonstrated benefits with regard to competencies which were highly required in the job of dentists. Psycho-social and business competence deserve closer attention in future curricular development.

  20. Schwinger-Keldysh diagrammatics for primordial perturbations

    NASA Astrophysics Data System (ADS)

    Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi

    2017-12-01

    We present a systematic introduction to the diagrammatic method for practical calculations in inflationary cosmology, based on Schwinger-Keldysh path integral formalism. We show in particular that the diagrammatic rules can be derived directly from a classical Lagrangian even in the presence of derivative couplings. Furthermore, we use a quasi-single-field inflation model as an example to show how this formalism, combined with the trick of mixed propagator, can significantly simplify the calculation of some in-in correlation functions. The resulting bispectrum includes the lighter scalar case (m<3H/2) that has been previously studied, and the heavier scalar case (m>3H/2) that has not been explicitly computed for this model. The latter provides a concrete example of quantum primordial standard clocks, in which the clock signals can be observably large.

  1. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  2. [Interpretation of false positive results of biochemical prenatal tests].

    PubMed

    Sieroszewski, Piotr; Słowakiewicz, Katarzyna; Perenc, Małgorzata

    2010-03-01

    Modern, non-invasive prenatal diagnostics based on biochemical and ultrasonographic markers of fetal defects allows us to calculate the risk of fetal chromosomal aneuploidies with high sensitivity and specificity. The introduction of non-invasive biochemical prenatal tests has turned out to produce frequent false positive results, i.e., cases in which invasive diagnostics do not confirm a fetal defect. However, prospective analysis of these cases showed numerous complications in the third trimester of the pregnancies.

  3. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ~1000 cm^3 and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
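
    The optimization step can be sketched as a non-negative least-squares fit of spot weights to a prescription through the dose influence matrix; the matrix and prescription below are random stand-ins for the MC-generated influence map:

        import numpy as np
        from scipy.optimize import nnls

        # Toy spot-weight optimization: dose = D @ w with nonnegative weights w,
        # D being the (voxels x spots) dose influence matrix from the MC engine.
        rng = np.random.default_rng(2)
        n_voxels, n_spots = 400, 60
        D = rng.random((n_voxels, n_spots)) * 0.05    # hypothetical influence map (Gy/MU)
        d_rx = np.full(n_voxels, 2.0)                 # uniform prescription dose (Gy)

        w, residual_norm = nnls(D, d_rx)              # least squares subject to w >= 0
        dose = D @ w
        print(f"residual = {residual_norm:.3f}, mean dose = {dose.mean():.2f} Gy")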

  4. Does ℏ play a role in multidimensional spectroscopy? Reduced hierarchy equations of motion approach to molecular vibrations.

    PubMed

    Sakurai, Atsunori; Tanimura, Yoshitaka

    2011-04-28

    To investigate the role of quantum effects in vibrational spectroscopies, we have carried out numerically exact calculations of linear and nonlinear response functions for an anharmonic potential system nonlinearly coupled to a harmonic oscillator bath. Although one cannot carry out the quantum calculations of the response functions with full molecular dynamics (MD) simulations for a realistic system which consists of many molecules, it is possible to grasp the essence of the quantum effects on the vibrational spectra by employing a model Hamiltonian that describes an intra- or intermolecular vibrational motion in a condensed phase. The present model fully includes vibrational relaxation, while the stochastic model often used to simulate infrared spectra does not. We have employed the reduced quantum hierarchy equations of motion approach in the Wigner space representation to deal with nonperturbative, non-Markovian, and nonsecular system-bath interactions. Taking the classical limit of the hierarchy equations of motion, we have obtained the classical equations of motion that describe the classical dynamics under the same physical conditions as in the quantum case. By comparing the classical and quantum mechanically calculated linear and multidimensional spectra, we found that the profiles of spectra for a fast modulation case were similar, but different for a slow modulation case. In both the classical and quantum cases, we identified the resonant oscillation peak in the spectra, but the quantum peak shifted to the red compared with the classical one if the potential is anharmonic. The prominent quantum effect is the 1-2 transition peak, which appears only in the quantum mechanically calculated spectra as a result of anharmonicity in the potential or nonlinearity of the system-bath coupling. While the contribution of the 1-2 transition is negligible in the fast modulation case, it becomes important in the slow modulation case as long as the amplitude of the frequency fluctuation is small. Thus, we observed a distinct difference between the classical and quantum mechanically calculated multidimensional spectra in the slow modulation case where spectral diffusion plays a role. This fact indicates that one may not reproduce the experimentally obtained multidimensional spectrum for high-frequency vibrational modes based on classical molecular dynamics simulations if the modulation that arises from surrounding molecules is weak and slow. A practical way to overcome the difference between the classical and quantum simulations was discussed.

  5. Cost-effectiveness in fall prevention for older women.

    PubMed

    Hektoen, Liv F; Aas, Eline; Lurås, Hilde

    2009-08-01

    The aim of this study was to estimate the cost-effectiveness of implementing an exercise-based fall prevention programme for home-dwelling women in the ≥80-year age group in Norway. The impact of the home-based individual exercise programme on the number of falls is based on a New Zealand study. On the basis of the cost estimates and the estimated reduction in the number of falls obtained with the chosen programme, we calculated the incremental costs and the incremental effect of the exercise programme as compared with no prevention. The calculation of the average healthcare cost of falling was based on assumptions regarding the distribution of fall injuries reported in the literature, four constructed representative case histories, assumptions regarding healthcare provision associated with the treatment of the specified cases, and estimated unit costs from Norwegian cost data. We calculated the average healthcare costs per fall for the first year. We found that the reduction in healthcare costs per individual for treating fall-related injuries was 1.85 times higher than the cost of implementing the fall prevention programme. The reduction in healthcare costs more than offset the cost of the prevention programme for women aged ≥80 years living at home, which indicates that health authorities should increase their focus on prevention. The main intention of this article is to present the costs connected to falls among the elderly in a transparent way and to visualize the whole cost picture. Cost-effectiveness analysis is a health policy tool that makes politicians and other makers of health policy conscious of this complexity.
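
    A minimal sketch of the cost-offset comparison, with hypothetical absolute costs chosen to reproduce the reported ratio of 1.85:

        # Cost of prevention vs expected fall-related treatment costs avoided
        programme_cost = 1000.0     # hypothetical per-person programme cost (NOK)
        costs_avoided = 1850.0      # hypothetical per-person treatment costs avoided (NOK)

        ratio = costs_avoided / programme_cost
        net_saving = costs_avoided - programme_cost
        print(f"savings/cost ratio = {ratio:.2f}")          # 1.85, as reported
        print(f"net saving per person = {net_saving:.0f} NOK")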

  6. Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems

    NASA Astrophysics Data System (ADS)

    Swinburne, Thomas D.; Marinica, Mihai-Cosmin

    2018-03-01

    The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when this is possible, the requirement of descriptors for each mechanism under study prevents implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method are demonstrated on a complex transformation of C15 interstitial defects in iron and on double kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120 000 atoms. Both cases exhibit significant anharmonicity under experimentally relevant temperatures.

  7. Modelling crystal plasticity by 3D dislocation dynamics and the finite element method: The Discrete-Continuous Model revisited

    NASA Astrophysics Data System (ADS)

    Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.

    2014-02-01

    A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.

  8. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved Ransac Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
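
    Step 2 of the pipeline reduces to plane geometry: a block corner candidate is the intersection point of three non-parallel discontinuity planes, each written as n·x = d. The normals and offsets below are illustrative:

        import numpy as np

        # Three discontinuity planes n_i . x = d_i (unit normals, offsets in metres)
        N = np.array([[0.0, 0.0, 1.0],     # sub-horizontal bedding plane
                      [1.0, 0.0, 0.0],     # joint set 1
                      [0.0, 1.0, 0.0]])    # joint set 2
        d = np.array([1.5, 0.4, 2.0])

        if abs(np.linalg.det(N)) > 1e-8:   # non-degenerate: single intersection point
            corner = np.linalg.solve(N, d)
            print(f"candidate block corner: {corner}")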

  9. Use of Computer-Generated Holograms in Security Hologram Applications

    NASA Astrophysics Data System (ADS)

    Bulanovs, A.; Bakanas, R.

    2016-10-01

    The article discusses the use of computer-generated holograms (CGHs) as one of the security features in relief-phase protective holograms. An improved method of calculating CGHs is presented, based on a ray-tracing approach for the case of interference of parallel rays. Software was developed for the calculation of multilevel phase CGHs and their integration into security holograms. The topology of the calculated computer-generated phase holograms was recorded on photoresist by optical greyscale lithography. Parameters of the recorded microstructures were investigated using atomic-force microscopy (AFM) and scanning electron microscopy (SEM). The results of the research show the highly protective properties of security elements based on CGH microstructures. In our opinion, wide use of CGHs is very promising in the structure of complex security holograms for increasing the level of protection against counterfeiting.

  10. Selection bias due to differential participation in a case-control study of mobile phone use and brain tumors.

    PubMed

    Lahkola, Anna; Salminen, Tiina; Auvinen, Anssi

    2005-05-01

    To evaluate the possible selection bias related to the differential participation of mobile phone users and non-users in a Finnish case-control study on mobile phone use and brain tumors. Mobile phone use was investigated among 777 controls and 726 cases participating in the full personal interview (full participants), and 321 controls and 103 cases giving only a brief phone interview (incomplete participants). To assess selection bias, the Mantel-Haenszel estimate of odds ratio was calculated for three different groups: full study participants, incomplete participants, and a combined group consisting of both full and incomplete participants. Among controls, 83% of the full participants and 73% of the incomplete participants had regularly used a mobile phone. Among cases, the figures were 76% and 64%, respectively. The odds ratio for brain tumor based on the combined group of full and incomplete participants was slightly closer to unity than that based only on the full participants. Selection bias tends to distort the effect estimates below unity, while analyses based on more comprehensive material gave results close to unity.
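
    The pooled estimate named above is the Mantel-Haenszel odds ratio; a minimal sketch over two hypothetical strata, since the abstract does not report stratum-level counts:

        # Mantel-Haenszel odds ratio over strata of 2x2 tables, each stratum given as
        # (exposed cases, exposed controls, unexposed cases, unexposed controls).
        strata = [
            (60, 70, 20, 15),       # hypothetical stratum 1
            (490, 575, 156, 113),   # hypothetical stratum 2
        ]
        num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
        den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
        or_mh = num / den
        print(f"Mantel-Haenszel OR = {or_mh:.2f}")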

  11. Applying cost accounting to operating room staffing in otolaryngology: time-driven activity-based costing and outpatient adenotonsillectomy.

    PubMed

    Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M

    2015-04-01

    (1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
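
    A minimal sketch of time-driven activity-based costing applied to operating room staffing; all compensation figures, capacities, and per-case minutes are hypothetical:

        # Per-case personnel cost = sum over roles of minutes_used x capacity cost rate,
        # where rate = annual compensation / practically available minutes per year.
        staff = {
            # role: (annual compensation USD, available min/year, min used per case)
            "surgeon":    (450_000, 100_000, 25),
            "anesthesia": (350_000, 100_000, 30),
            "or_nurse":   (110_000, 110_000, 45),
            "scrub_tech": ( 80_000, 110_000, 45),
        }
        per_case = sum(comp / avail * mins for comp, avail, mins in staff.values())
        print(f"personnel cost per case: ${per_case:,.2f}")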

  12. Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis

    NASA Astrophysics Data System (ADS)

    Mills, D. A.

    2017-10-01

    In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis will affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated method, this research uses agent-based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to reflect vaccination capabilities accurately. Recent studies suggest that, despite preventing major symptoms, the acellular pertussis vaccine does not prevent the colonization and transmission of Bordetella pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the pertussis bacteria, rather than an immune individual. Agent-based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.
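
    A toy illustration of the bias mechanism described above, using hypothetical attack rates: efficacy computed from symptomatic cases alone overstates protection when vaccinated asymptomatic carriers still transmit:

        # Vaccination efficacy VE = 1 - AR_vaccinated / AR_unvaccinated
        attack_unvaccinated = 0.10      # hypothetical attack rate, unvaccinated
        symptomatic_vaccinated = 0.02   # hypothetical symptomatic rate, vaccinated
        carriers_vaccinated = 0.04      # hypothetical asymptomatic carriers, uncounted

        ve_symptomatic = 1 - symptomatic_vaccinated / attack_unvaccinated
        ve_colonization = 1 - (symptomatic_vaccinated + carriers_vaccinated) / attack_unvaccinated
        print(f"VE against disease:        {ve_symptomatic:.0%}")    # 80%
        print(f"VE including colonization: {ve_colonization:.0%}")   # 40%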

  13. Efficient Procedure for the Numerical Calculation of Harmonic Vibrational Frequencies Based on Internal Coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    We propose a general procedure for the numerical calculation of harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm^-1 from those obtained from Cartesian coordinates.

  14. [Algorithm for estimating chlorophyll-a concentration in case II water body based on bio-optical model].

    PubMed

    Yang, Wei; Chen, Jin; Mausushita, Bunki

    2009-01-01

    In the present study, a novel retrieval method for estimating chlorophyll-a concentration in case II waters based on a bio-optical model was proposed and tested with data measured in the laboratory. A series of reflectance spectra, for which the concentration of each sample constituent (for example chlorophyll-a, NPSS, etc.) was obtained from accurate experiments, was used to calculate the absorption and backscattering coefficients of the constituents of the case II waters. Then the non-negative least squares method was applied to calculate the concentrations of chlorophyll-a and non-phytoplankton suspended sediments (NPSS). Green algae were first collected from Lake Kasumigaura in Japan and then cultured in the laboratory. The reflectance spectra of waters with different amounts of phytoplankton and NPSS were measured in a dark room using a FieldSpec Pro VNIR (Analytical Spectral Devices Inc., Boulder, CO, USA). In order to validate whether this method can be applied to multispectral data (for example Landsat TM), the spectra measured in the laboratory were resampled to Landsat TM bands 1, 2, 3 and 4. Different combinations of TM bands were compared to derive the most appropriate wavelengths for detecting chlorophyll-a in case II water for green algae. The results indicated that the combination of TM bands 2, 3 and 4 achieved much better accuracy than other combinations, and the estimated concentration of chlorophyll-a was significantly more accurate than with empirical methods. It is expected that this method can be directly applied to real remotely sensed images because it is based on a bio-optical model.
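
    The retrieval step can be sketched as a non-negative least-squares inversion of the bulk optical properties; the specific absorption coefficients and measured values below are hypothetical stand-ins for the laboratory data:

        import numpy as np
        from scipy.optimize import nnls

        # a_total(band) = sum_i a*_i(band) * C_i: solve for constituent concentrations
        a_star = np.array([[0.030, 0.012],    # rows: TM-like bands; cols: chl-a, NPSS
                           [0.018, 0.020],
                           [0.055, 0.009]])
        a_measured = np.array([0.78, 0.82, 1.35])   # hypothetical bulk absorption

        conc, resid = nnls(a_star, a_measured)      # concentrations constrained >= 0
        print(f"chl-a = {conc[0]:.1f}, NPSS = {conc[1]:.1f} (residual {resid:.3f})")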

  15. Classical calculation of the equilibrium constants for true bound dimers using complete potential energy surface.

    PubMed

    Buryak, Ilya; Vigasin, Andrey A

    2015-12-21

    The present paper aims at deriving classical expressions which permit calculation of the equilibrium constant for weakly interacting molecular pairs using a complete multidimensional potential energy surface. The latter is often available nowadays as a result of increasingly sophisticated and accurate ab initio calculations. Water dimer formation is considered as an example. It is shown that even in the case of a rather strongly bound dimer the suggested expression permits obtaining a quite reliable estimate of the equilibrium constant. The reliability of our water dimer equilibrium constant is briefly discussed by comparison with the available data based on experimental observations, quantum calculations, and the use of the RRHO approximation, provided the latter is restricted to the formation of true bound states only.

  16. S-matrix calculations of energy levels of sodiumlike ions

    DOE PAGES

    Sapirstein, J.; Cheng, K. T.

    2015-06-24

    A recent S-matrix-based QED calculation of energy levels of the lithium isoelectronic sequence is extended to the general case of a valence electron outside an arbitrary filled core. Emphasis is placed on modifications of the lithiumlike formulas required because more than one core state is present, and an unusual feature of the two-photon exchange contribution involving autoionizing states is discussed. Here, the method is illustrated with a calculation of the energy levels of sodiumlike ions, with results for 3s1/2, 3p1/2, and 3p3/2 energies tabulated for the range Z = 30-100. Comparison with experiment and other calculations is given, and prospects for extension of the method to ions with more complex electronic structure are discussed.

  17. Calculation of surface enthalpy of solids from an ab initio electronegativity based model: case of ice.

    PubMed

    Douillard, J M; Henry, M

    2003-07-15

    A very simple route to the calculation of the surface energy of solids is proposed, because this value is very difficult to determine experimentally. The first step is the calculation of the attractive part of the electrostatic energy of crystals. The partial charges used in this calculation are obtained by using electronegativity equalization and scales of electronegativity and hardness deduced from physical characteristics of the atom. The lattice energies of the infinite crystal and of semi-infinite layers are then compared. The difference is related to the energy of cohesion and then to the surface energy. Very good results are obtained for ice when compared with the surface energy of liquid water, which is generally considered a good approximation of the surface energy of ice.
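
    A toy sketch of the electronegativity-equalization step that produces the partial charges; the electronegativity, hardness, and Coulomb parameters below are illustrative, not the paper's values:

        import numpy as np

        # Minimize E(q) = sum_i (chi_i q_i + eta_i q_i^2) + sum_{i<j} J_ij q_i q_j
        # subject to sum_i q_i = Q; stationarity gives a linear system in (q, mu).
        chi = np.array([7.54, 2.20, 2.20])    # electronegativities: O, H, H (illustrative)
        eta = np.array([6.08, 6.42, 6.42])    # hardnesses (illustrative)
        J = np.array([[0.0, 5.0, 5.0],        # off-diagonal Coulomb terms (illustrative)
                      [5.0, 0.0, 3.5],
                      [5.0, 3.5, 0.0]])
        Q = 0.0                               # total charge of the molecule

        n = len(chi)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = J + np.diag(2.0 * eta)    # dE/dq_i = chi_i + 2 eta_i q_i + sum_j J_ij q_j
        A[:n, n] = -1.0                       # common potential mu (Lagrange multiplier)
        A[n, :n] = 1.0                        # charge-conservation row
        b = np.concatenate([-chi, [Q]])
        sol = np.linalg.solve(A, b)
        q, mu = sol[:n], sol[n]
        print(f"partial charges: {np.round(q, 3)} (sum = {q.sum():.2f}), mu = {mu:.3f}")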

  18. Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Potapczuk, Mark G.

    1993-01-01

    A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
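
    The streamline and trajectory integrations mentioned above can be sketched with a standard fourth-order Runge-Kutta marcher; the velocity field here is a toy analytic stand-in for the panel-code flow solution:

        import numpy as np

        def velocity(x):
            """Hypothetical 3D flow field (unit-normalized toy vortex)."""
            u = np.array([-x[1], x[0], 0.05])
            return u / np.linalg.norm(u)

        def rk4_step(x, h):
            """One fourth-order Runge-Kutta step along the local flow direction."""
            k1 = velocity(x)
            k2 = velocity(x + 0.5 * h * k1)
            k3 = velocity(x + 0.5 * h * k2)
            k4 = velocity(x + h * k3)
            return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        x = np.array([1.0, 0.0, 0.0])          # seed point of the streamline
        for _ in range(200):                   # march 200 steps of length 0.05
            x = rk4_step(x, 0.05)
        print(f"streamline endpoint: {np.round(x, 3)}")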

  19. A first-principle calculation of the XANES spectrum of Cu2+ in water

    NASA Astrophysics Data System (ADS)

    La Penna, G.; Minicozzi, V.; Morante, S.; Rossi, G. C.; Stellato, F.

    2015-09-01

    The progress in high performance computing we are witnessing today offers the possibility of accurate electron density calculations of systems in realistic physico-chemical conditions. In this paper, we present a strategy aimed at performing a first-principle computation of the low energy part of the X-ray Absorption Spectroscopy (XAS) spectrum based on the density functional theory calculation of the electronic potential. To test its effectiveness, we apply the method to the computation of the X-ray absorption near edge structure part of the XAS spectrum in the paradigmatic, but simple case of Cu2+ in water. In order to take into account the effect of the metal site structure fluctuations in determining the experimental signal, the theoretical spectrum is evaluated as the average over the computed spectra of a statistically significant number of simulated metal site configurations. The comparison of experimental data with theoretical calculations suggests that Cu2+ lives preferentially in a square-pyramidal geometry. The remarkable success of this approach in the interpretation of XAS data makes us optimistic about the possibility of extending the computational strategy we have outlined to the more interesting case of molecules of biological relevance bound to transition metal ions.

  20. Structural response to discrete and continuous gusts of an airplane having wing bending flexibility and a correlation of calculated and flight results

    NASA Technical Reports Server (NTRS)

    Houbolt, John C; Kordes, Eldon E

    1954-01-01

    An analysis is made of the structural response to gusts of an airplane having the degrees of freedom of vertical motion and wing bending flexibility and basic parameters are established. A convenient and accurate numerical solution of the response equations is developed for the case of discrete-gust encounter, an exact solution is made for the simpler case of continuous-sinusoidal-gust encounter, and the procedure is outlined for treating the more realistic condition of continuous random atmospheric turbulence, based on the methods of generalized harmonic analysis. Correlation studies between flight and calculated results are then given to evaluate the influence of wing bending flexibility on the structural response to gusts of two twin-engine transports and one four-engine bomber. It is shown that calculated results obtained by means of a discrete-gust approach reveal the general nature of the flexibility effects and lead to qualitative correlation with flight results. In contrast, calculations by means of the continuous-turbulence approach show good quantitative correlation with flight results and indicate a much greater degree of resolution of the flexibility effects.

  1. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    NASA Astrophysics Data System (ADS)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree within 2.3% of the global maximum dose or 1 mm distance to agreement with measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
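
    The 2%/2 mm gamma criterion used above has a simple brute-force form: each evaluated point passes if some nearby reference point agrees within the combined dose-difference and distance tolerances. A minimal sketch follows; real QA software interpolates the reference grid, so this coarse version is only illustrative.

    ```python
    import numpy as np

    def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.02, dta_mm=2.0):
        """Brute-force global gamma analysis on a 2D dose grid.

        dose_ref, dose_eval: 2D arrays on the same grid (e.g. film vs. IDC).
        dd: dose-difference criterion relative to the global max (2% here).
        dta_mm: distance-to-agreement criterion in mm.
        Returns the fraction of points with gamma <= 1.
        """
        dmax = dose_ref.max()
        ny, nx = dose_ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        gammas = np.zeros_like(dose_eval, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
                ddiff2 = (dose_eval[iy, ix] - dose_ref) ** 2
                g2 = dist2 / dta_mm ** 2 + ddiff2 / (dd * dmax) ** 2
                gammas[iy, ix] = np.sqrt(g2.min())
        return (gammas <= 1.0).mean()

    # A uniform 1% dose offset passes the 2% criterion everywhere:
    ref = np.ones((20, 20))
    print(gamma_pass_rate(ref, ref * 1.01, spacing_mm=1.0))  # 1.0
    ```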

  2. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  3. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  4. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  5. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  6. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
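
    The rule text in the five records above is elided, but its arithmetic combines a discovery time, a shutdown time, a maximum flow rate, and the drained line volume. A hypothetical worked example with invented numbers (consult 33 CFR 154.1029 for the actual terms and units):

    ```python
    # Hypothetical worst-case discharge arithmetic for one transfer pipe
    # (illustrative numbers only; not taken from the regulation).
    discover_hr = 0.5       # max time to discover the release (hours)
    shutdown_hr = 0.25      # max time to shut down flow (hours)
    flow_bbl_per_hr = 4000  # maximum pipe flow rate (barrels/hour)
    line_volume_bbl = 300   # volume drained from the piping after shutdown

    worst_case_bbl = (discover_hr + shutdown_hr) * flow_bbl_per_hr + line_volume_bbl
    print(worst_case_bbl)   # 3300.0 barrels
    ```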

  7. Software GOLUCA: Knowledge Representation in Mental Calculation

    ERIC Educational Resources Information Center

    Casas-Garcia, Luis M.; Luengo-Gonzalez, Ricardo; Godinho-Lopes, Vitor

    2011-01-01

    We present a new software package, called Goluca (Godinho, Luengo, and Casas, 2007), based on the technique of Pathfinder Associative Networks (Schvaneveldt, 1989), which produces graphical representations of the cognitive structure of individuals in a given field of knowledge. In this case, we studied the strategies used by teachers and its relationship…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV treated by volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. Conclusions: A Monte Carlo treatment planning model exclusively based on maps generated from patient imaging data has been presented. The sequencing of these maps yields deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.

  9. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  10. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
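
    For reference alongside the two records above, the detrending operation at the heart of DFA fits in a few lines. A minimal sketch of standard (first-order) DFA; the papers' analytical frequency responses characterize exactly this per-window polynomial detrending.

    ```python
    import numpy as np

    def dfa(x, scales, order=1):
        """Detrended fluctuation analysis of a 1D signal.

        For each window size s, the integrated signal is split into
        non-overlapping windows, a polynomial trend of the given order is
        removed per window, and the RMS deviation F(s) is returned.
        """
        y = np.cumsum(x - np.mean(x))        # integrated (profile) signal
        fluct = []
        for s in scales:
            n_win = len(y) // s
            ms = []
            for i in range(n_win):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                ms.append(np.mean((seg - trend) ** 2))
            fluct.append(np.sqrt(np.mean(ms)))
        return np.array(fluct)

    # Scaling exponent alpha from a log-log fit; ~0.5 for white noise.
    x = np.random.randn(10000)
    scales = np.array([16, 32, 64, 128, 256])
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(round(alpha, 2))
    ```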

  11. MrGrid: A Portable Grid Based Molecular Replacement Pipeline

    PubMed Central

    Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.

    2010-01-01

    Background: The crystallographic determination of protein structures can be computationally demanding and, for difficult cases, can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings: MrGrid is a portable web-based application written in Java/JSP and Ruby that takes advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions: MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612

  12. Computer Programs for Calculating and Plotting the Stability Characteristics of a Balloon Tethered in a Wind

    NASA Technical Reports Server (NTRS)

    Bennett, R. M.; Bland, S. R.; Redd, L. T.

    1973-01-01

    Computer programs for calculating the stability characteristics of a balloon tethered in a steady wind are presented. Equilibrium conditions, characteristic roots, and modal ratios are calculated for a range of discrete values of velocity for a fixed tether-line length. Separate programs are used: (1) to calculate longitudinal stability characteristics, (2) to calculate lateral stability characteristics, (3) to plot the characteristic roots versus velocity, (4) to plot the characteristic roots in root-locus form, (5) to plot the longitudinal modes of motion, and (6) to plot the lateral modes of motion. The basic equations, program listings, and the input and output data for sample cases are presented, with a brief discussion of the overall operation and limitations. The programs are based on a linearized, stability-derivative type of analysis, including balloon aerodynamics, apparent mass, buoyancy effects, and static forces which result from the tether line.

  13. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort

    PubMed Central

    Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang

    2017-01-01

    Purpose: We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods: Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results: PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions: KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed performance similar to that of ERSPCRC-HG. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017

  14. Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort.

    PubMed

    Park, Jae Young; Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang; Byun, Seok-Soo

    2017-01-01

    We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed performance similar to that of ERSPCRC-HG. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings.
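
    A risk calculator of this kind is, at its core, a logistic model over the listed predictors. A minimal sketch with placeholder coefficients; the published KPCRC-HG coefficients are not reproduced here, so the numbers below are invented for illustration only.

    ```python
    import math

    def high_grade_pc_risk(coefs, intercept, features):
        """Logistic-regression risk score of the kind behind KPCRC-HG.

        coefs/intercept are HYPOTHETICAL placeholders, not the published
        model; features would be PSA, DRE, TRUS findings, prostate volume.
        """
        z = intercept + sum(b * x for b, x in zip(coefs, features))
        return 1.0 / (1.0 + math.exp(-z))

    # Made-up coefficients, for illustration only.
    risk = high_grade_pc_risk(
        coefs=[0.08, 1.1, 0.9, -0.03],
        intercept=-3.0,
        features=[6.5,    # PSA (ng/ml)
                  1,      # abnormal DRE (0/1)
                  1,      # abnormal TRUS (0/1)
                  40.0],  # prostate volume (ml)
    )
    print(f"{risk:.1%}")  # biopsy if above the chosen cut-off (e.g. 5%)
    ```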

  15. Comparing risk estimates following diagnostic CT radiation exposures employing different methodological approaches.

    PubMed

    Kashcheev, Valery V; Pryakhin, Evgeny A; Menyaylo, Alexander N; Chekin, Sergey Yu; Ivanov, Viktor K

    2014-06-01

    The current study has two aims: the first is to quantify the difference between radiation risks estimated with the use of organ or effective doses, particularly when planning pediatric and adult computed tomography (CT) examinations. The second aim is to determine the method of calculating organ doses and cancer risk using dose-length product (DLP) for typical routine CT examinations. In both cases, the radiation-induced cancer risks from medical CT examinations were evaluated as a function of gender and age. Lifetime attributable risk (LAR) values from CT scanning were estimated with the use of ICRP (Publication 103) risk models and Russian national medical statistics data. For populations under the age of 50 y, the risk estimates based on organ doses usually are 30% higher than estimates based on effective doses. In older populations, the difference can be up to a factor of 2.5. The typical distributions of organ doses were defined for Chest Routine, Abdominal Routine, and Head Routine examinations. The distributions of organ doses were dependent on the anatomical region of scanning. The most exposed organs/tissues were thyroid, breast, esophagus, and lungs in cases of Chest Routine examination; liver, stomach, colon, ovaries, and bladder in cases of Abdominal Routine examination; and brain for Head Routine examinations. The conversion factors for calculation of typical doses to organs or tissues at risk using DLP were determined. LAR of cancer estimated with organ doses calculated from DLP was compared with the risk estimated on the basis of organ doses measured with the use of silicon photodiode dosimeters. The estimated difference in LAR is less than 29%.
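
    The DLP-based method amounts to multiplying the scanner-reported DLP by organ-specific conversion factors, then by risk coefficients. A sketch with invented factors; the paper derives its own conversion factors and uses ICRP risk models, so nothing below is quantitative.

    ```python
    # Organ dose and lifetime attributable risk (LAR) from DLP, with
    # HYPOTHETICAL coefficients (the paper derives its own factors).
    dlp_mgy_cm = 450.0                 # DLP reported for a chest scan

    organ_dose_factors = {             # mGy per mGy*cm, illustrative only
        "lung":    0.020,
        "breast":  0.018,
        "thyroid": 0.010,
    }
    lar_per_mgy = {                    # cases per 100,000 per mGy, illustrative
        "lung":    0.50,
        "breast":  0.40,
        "thyroid": 0.10,
    }

    for organ, k in organ_dose_factors.items():
        dose = k * dlp_mgy_cm          # typical organ dose in mGy
        lar = dose * lar_per_mgy[organ]
        print(f"{organ}: {dose:.1f} mGy, LAR ~ {lar:.1f} per 100,000")
    ```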

  16. Separated-pair independent particle model and the generalized Brillouin theorem: ab initio calculations on the dissociation of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundberg, Kenneth Randall

    1976-01-01

    A method is developed to optimize the separated-pair independent particle (SPIP) wave function; it is a special case of the separated-pair theory obtained by using two-term natural expansions of the geminals. The orbitals are optimized by a theory based on the generalized Brillouin theorem and iterative configuration interaction (CI) calculations in the space of the SPIP function and its single excitations. The geminal expansion coefficients are optimized by serial 2 x 2 CI calculations. Formulas are derived for the matrix elements. An algorithm to implement the method is presented, and the work needed to evaluate the molecular integrals is discussed.

  17. Generalization of the Mulliken-Hush treatment for the calculation of electron transfer matrix elements

    NASA Astrophysics Data System (ADS)

    Cave, Robert J.; Newton, Marshall D.

    1996-01-01

    A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.
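
    In the two-state case, the generalized Mulliken-Hush coupling has a closed form in purely adiabatic quantities:

    ```latex
    H_{DA} \;=\; \frac{\left|\mu_{12}\right|\,\Delta E_{12}}
                      {\sqrt{\left(\Delta\mu_{12}\right)^{2} + 4\,\mu_{12}^{2}}}
    ```

    where $\Delta E_{12}$ is the adiabatic energy gap, $\mu_{12}$ the transition dipole moment along the charge-transfer direction, and $\Delta\mu_{12}$ the difference of the adiabatic state dipole moments. All three quantities can come from quantum chemical calculation or from experiment, which is why the method works with purely experimental data as noted above.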

  18. Gravimetric surveys for assessing rock mass condition around a mine shaft

    NASA Astrophysics Data System (ADS)

    Madej, Janusz

    2017-06-01

    The fundamentals of the vertical gravimetric surveying method used in mine shafts are presented in the paper. The procedures for gravimetric measurement and for the calculation of interval and complex density are discussed in detail. The density calculations are based on an original method accounting for the gravity influence of the mine shaft itself, thus guaranteeing closeness of the calculated and real values of density of the rocks beyond the shaft lining. The results of many gravimetric surveys performed in shafts are presented and interpreted. As a result, information about the location of heterogeneous zones beyond the shaft lining is obtained. In many cases, these zones have threatened the safe operation of machines and utilities in the shaft.

  19. Truncated Sum Rules and Their Use in Calculating Fundamental Limits of Nonlinear Susceptibilities

    NASA Astrophysics Data System (ADS)

    Kuzyk, Mark G.

    Truncated sum rules have been used to calculate the fundamental limits of the nonlinear susceptibilities and the results have been consistent with all measured molecules. However, given that finite-state models appear to result in inconsistencies in the sum rules, it may seem unclear why the method works. In this paper, the assumptions inherent in the truncation process are discussed and arguments based on physical grounds are presented in support of using truncated sum rules in calculating fundamental limits. The clipped harmonic oscillator is used as an illustration of how the validity of truncation can be tested and several limiting cases are discussed as examples of the nuances inherent in the method.
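
    The sum rules in question are the generalized Thomas-Reiche-Kuhn relations; in the form commonly used in this fundamental-limits literature,

    ```latex
    \sum_{n}\left(E_{n}-\frac{E_{m}+E_{p}}{2}\right) x_{mn}\,x_{np}
      \;=\; \frac{\hbar^{2} N}{2 m_{e}}\,\delta_{mp}
    ```

    where $x_{mn}$ are position matrix elements, $N$ is the number of electrons, and $m_{e}$ the electron mass. Truncation keeps only a finite set of states $n$, which cannot satisfy every $(m,p)$ equation simultaneously; this is the source of the inconsistencies the abstract refers to, and the paper's argument concerns when the truncated subset may nevertheless be trusted.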

  20. Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing

    NASA Astrophysics Data System (ADS)

    Tian, Q.; Fainman, Y.; Lee, Sing H.

    1989-02-01

    The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors are different for different algorithms. Summaries on how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also compare theoretically the effectiveness for classification of the discriminant functions from F-S, HTC and F-K with LDF and GMF, and between the linear-mapping-based algorithms and the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image data bases, with each image consisting of 64 x 64 pixels.
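
    The pseudo-inverse trick suggested above can be checked directly: for p pixels and n << p training images, the large singular covariance is inverted through the small image Gram matrix. A numpy sketch under our own notation (not the paper's code):

    ```python
    import numpy as np

    def covariance_pinv(X):
        """Pseudo-inverse of a singular pixel covariance via the Gram matrix.

        X: p x n matrix of n training images (as columns) with p pixels,
        p >> n. The p x p covariance C = X X^T / n has rank at most n; its
        pseudo-inverse follows from the small n x n Gram matrix G = X^T X
        through the identity C^+ = n * X G^{-2} X^T (verify via the SVD).
        """
        G = X.T @ X                    # n x n, non-singular for independent images
        G_inv = np.linalg.inv(G)
        return X.shape[1] * X @ G_inv @ G_inv @ X.T

    # Check against the direct SVD-based pseudo-inverse on a small example.
    X = np.random.randn(50, 5)         # 50 "pixels", 5 "images"
    C = X @ X.T / X.shape[1]
    print(np.allclose(covariance_pinv(X), np.linalg.pinv(C)))  # True
    ```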

  1. Modelling the role of dietary habits and eating behaviours on the development of acute coronary syndrome or stroke: aims, design, and validation properties of a case-control study.

    PubMed

    Kastorini, Christina-Maria; Milionis, Haralampos J; Goudevenos, John A; Panagiotakos, Demosthenes B

    2010-09-14

    In this paper the methodology and procedures of a case-control study that will be developed for assessing the role of dietary habits and eating behaviours on the development of acute coronary syndrome and stroke are presented. Based on statistical power calculations, 1000 participants will be enrolled; of them, 250 will be consecutive patients with a first acute coronary event, 250 consecutive patients with a first ischaemic stroke, and 500 population-based healthy subjects (controls), age and sex matched to the cases. Socio-demographic, clinical, dietary, psychological, and other lifestyle characteristics will be measured. Dietary habits and eating behaviours will be evaluated with a special questionnaire that has been developed for the study.

  2. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses

    PubMed Central

    Faith, Daniel P.

    2015-01-01

    The phylogenetic diversity measure (‘PD’) measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
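
    Expected PD itself is a short calculation: each branch contributes its length weighted by the probability that at least one descendant taxon survives. A toy sketch on a small hypothetical tree, with invented extinction probabilities:

    ```python
    # Expected phylogenetic diversity (PD) under lineage extinction
    # probabilities. Branches are given as (length, [descendant taxa]);
    # tree shape and probabilities are invented for illustration.
    from math import prod

    p_extinct = {"A": 0.1, "B": 0.9, "C": 0.5}

    branches = [
        (2.0, ["A"]),            # terminal branch to A
        (2.0, ["B"]),
        (3.0, ["C"]),
        (1.5, ["A", "B"]),       # internal branch above the (A,B) clade
        (1.0, ["A", "B", "C"]),  # root branch
    ]

    expected_pd = sum(
        length * (1 - prod(p_extinct[t] for t in taxa))
        for length, taxa in branches
    )
    print(round(expected_pd, 3))  # 5.82
    ```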

  3. Optimal charges in lead progression: a structure-based neuraminidase case study.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce; Cheng, Alan C

    2006-04-20

    Collective experience in structure-based lead progression has found electrostatic interactions to be more difficult to optimize than shape-based ones. A major reason for this is that the net electrostatic contribution observed includes a significant nonintuitive desolvation component in addition to the more intuitive intermolecular interaction component. To investigate whether knowledge of the ligand optimal charge distribution can facilitate more intuitive design of electrostatic interactions, we took a series of small-molecule influenza neuraminidase inhibitors with known protein cocrystal structures and calculated the difference between the optimal and actual charge distributions. This difference from the electrostatic optimum correlates with the calculated electrostatic contribution to binding (r(2) = 0.94) despite small changes in binding modes caused by chemical substitutions, suggesting that the optimal charge distribution is a useful design goal. Furthermore, detailed suggestions for chemical modification generated by this approach are in many cases consistent with observed improvements in binding affinity, and the method appears to be useful despite discrete chemical constraints. Taken together, these results suggest that charge optimization is useful in facilitating generation of compound ideas in lead optimization. Our results also provide insight into design of neuraminidase inhibitors.

  4. Remote control missile model test

    NASA Technical Reports Server (NTRS)

    Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.

    1989-01-01

    An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.

  5. [Can Topical Negative Pressure Therapy be Performed as a Cost-Effective General Surgery Procedure in the German DRG System?].

    PubMed

    Hirche, Z; Xiong, L; Hirche, C; Willis, S

    2016-04-01

    Topical negative pressure therapy (TNPT) is established in surgical wound therapy for a variety of indications. Nevertheless, there is only sparse evidence regarding its therapeutic superiority or cost-effectiveness in the German DRG system (G-DRG). This study was designed to analyse the cost-effectiveness of TNPT in the G-DRG system, with a focus on daily treatment costs and reimbursement in a general surgery care setting. In this retrospective study, we included 176 patients who underwent TNPT between 2007 and 2011 for general surgery indications. The cost-effectiveness analysis covered 149 patients, for whom reimbursement with and without TNPT was simulated by means of a virtual control group in which the TNP procedure was withdrawn for DRG calculation. Costs for wound dressings and for TNPT rental and material were then calculated, and comparison between the true and the virtual group yielded the effective remaining surplus per case. Total reimbursement for the included TNPT cases was 2,323,70.04 €; costs for wound dressings and TNPT rental were 102,669.20 €. TNPT was cost-effective in 41 cases (27.5%): these cases generated 607,422.03 € with TNP treatment, versus 442,015.10 € in the virtual control group without TNP, with wound dressing and TNPT rental costs of 47,376.68 €. Overall, TNPT generated a surplus of 6759 € over the 5 years across the 149 patients. In 108 cases (72.5%) TNPT was not cost-effective. TNPT applied in a representative general surgery setting allows wound therapy without a major financial burden. Based on the costs for wound dressings and TNPT rental, a primarily medically based decision on when to use TNPT can be made within a balanced product cost accounting. This study does not analyse the superiority of TNPT in wound care, so further prospective studies focusing on therapeutic superiority and cost-effectiveness are required.

  6. Elastic-Plastic Fracture Mechanics Analysis of Critical Flaw Size in ARES I-X Flange-to-Skin Welds

    NASA Technical Reports Server (NTRS)

    Chell, G. Graham; Hudak, Stephen J., Jr.

    2008-01-01

    NASA's Ares I-X Upper Stage Simulator (USS) is being fabricated from welded A516 steel. In order to ensure the structural integrity of these welds it is of interest to calculate the critical initial flaw size (CIFS) to establish rational inspection requirements. The CIFS is in turn dependent on the critical final flaw size (CFS), as well as fatigue flaw growth resulting from transportation, handling and service-induced loading. These calculations were made using linear elastic fracture mechanics (LEFM), which is thought to be conservative because it is based on a lower-bound, so-called elastic, fracture toughness determined from tests that displayed significant plasticity. Nevertheless, there was still concern that the yield-magnitude stresses generated in the flange-to-skin weld by the combination of axial stresses due to axial forces, fit-up stresses, and weld residual stresses could give rise to significant flaw-tip plasticity, which might render the LEFM results non-conservative. The objective of the present study was to employ elastic-plastic fracture mechanics (EPFM) to determine CFS values, and then compare these values to CFS values evaluated using LEFM. CFS values were calculated for twelve cases involving surface and embedded flaws, EPFM analyses with and without plastic shakedown of the stresses, LEFM analyses, and various welding residual stress distributions. For the cases examined, the computed CFS values based on elastic analyses were the smallest in all instances where the failures were predicted to be controlled by the fracture toughness. However, in certain cases, the CFS values predicted by the elastic-plastic analyses were smaller than those predicted by the elastic analyses; in these cases the failure criterion was a breakdown in stress intensity factor validity limits for deep flaws (a > 0.90t), rather than the fracture toughness. Plastic relaxation of stresses accompanying shakedown always increases the calculated CFS values compared to the CFS values determined without shakedown. Thus, it is conservative to ignore shakedown effects.

  7. An ab initio mechanism for efficient population of triplet states in cytotoxic sulfur substituted DNA bases: the case of 6-thioguanine.

    PubMed

    Martínez-Fernández, Lara; González, Leticia; Corral, Inés

    2012-02-18

    The deactivation mechanism of the cytotoxic 6-thioguanine, the 6-sulfur-substituted analogue of the canonical DNA base guanine, is unveiled by ab initio calculations. Oxygen-by-sulfur substitution leads to efficient population of triplet states, the first step for generating singlet oxygen, which is responsible for its cytotoxicity.

  8. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has been one of the most important tasks in photogrammetry: it recovers the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm, combining the RANSAC method with the direct linear transformation (DLT) model, that effectively avoids the difficulty of determining initial values when using the collinearity equations. The results show that this strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
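
    The full DLT resection model is too long to reproduce here, but the RANSAC consensus loop driving it is short. A generic sketch, with a 2D line model standing in for the resection model (the real pipeline would use image/object point correspondences and a larger minimal sample):

    ```python
    import numpy as np

    def ransac(points, fit, residuals, n_sample, n_iter=200, tol=1.0):
        """Generic RANSAC: fit on random minimal samples, keep the largest
        consensus set, then refit the model on all of its inliers."""
        rng = np.random.default_rng(0)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            idx = rng.choice(len(points), n_sample, replace=False)
            model = fit(points[idx])
            inliers = residuals(model, points) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return fit(points[best_inliers]), best_inliers

    # Stand-in model: a 2D line y = a*x + b.
    fit = lambda P: np.polyfit(P[:, 0], P[:, 1], 1)
    res = lambda m, P: np.abs(np.polyval(m, P[:, 0]) - P[:, 1])

    x = np.linspace(0, 10, 100)
    pts = np.column_stack([x, 2 * x + 1 + 0.1 * np.random.randn(100)])
    pts[::10, 1] += 20                  # inject gross errors
    model, inl = ransac(pts, fit, res, n_sample=2)
    print(model)                        # ~[2, 1] despite the outliers
    ```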

  9. Viscous flow calculations for the AGARD standard configuration airfoils with experimental comparisons

    NASA Technical Reports Server (NTRS)

    Howlett, James T.

    1989-01-01

    Recent experience in calculating unsteady transonic flow by means of viscous-inviscid interactions with the XTRAN2L computer code is examined. The boundary layer method for attached flows is based upon the work of Rizzetta. The nonisentropic corrections of Fuglsang and Williams are also incorporated along with the viscous interaction for some cases and initial results are presented. For unsteady flows, the inverse boundary layer equations developed by Vatsa and Carter are used in a quasi-steady manner and preliminary results are presented.

  10. A computer program for helicopter rotor noise using Lowson's formula in the time domain

    NASA Technical Reports Server (NTRS)

    Parks, C. L.

    1975-01-01

    A computer program (D3910) was developed to calculate both the far field and near field acoustic pressure signature of a tilted rotor in hover or uniform forward speed. The analysis, carried out in the time domain, is based on Lowson's formulation of the acoustic field of a moving force. The digital computer program is described, including methods used in the calculations, a flow chart, program D3910 source listing, instructions for the user, and two test cases with input and output listings and output plots.

  11. Calculated momentum dependence of Zhang-Rice states in transition metal oxides.

    PubMed

    Yin, Quan; Gordienko, Alexey; Wan, Xiangang; Savrasov, Sergey Y

    2008-02-15

    Using a combination of local density functional theory and cluster exact diagonalization based dynamical mean field theory, we calculate many-body electronic structures of several Mott insulating oxides including undoped high-Tc materials. The dispersions of the lowest occupied electronic states are associated with the Zhang-Rice singlets in cuprates and with doublets, triplets, quadruplets, and quintets in more general cases. Our results agree with angle-resolved photoemission experiments, including the decrease of the spectral weight of the Zhang-Rice band as it approaches k=0.

  12. Effect of Boundary Conditions on the Axial Compression Buckling of Homogeneous Orthotropic Composite Cylinders in the Long Column Range

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Nemeth, Michael P.; Oremont, Leonard; Jegley, Dawn C.

    2011-01-01

    Buckling loads for long isotropic and laminated cylinders are calculated based on Euler, Fluegge and Donnell's equations. Results from these methods are presented using simple parameters useful for fundamental design work. Buckling loads for two types of simply supported boundary conditions are calculated using finite element methods for comparison to select cases of the closed form solution. Results indicate that relying on Donnell theory can result in an over-prediction of buckling loads by as much as 40% in isotropic materials.
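
    For orientation, the long-column range discussed above is anchored by the classical Euler load for a simply supported column,

    ```latex
    P_{cr} \;=\; \frac{\pi^{2} E I}{L^{2}}
    ```

    where, for a thin-walled circular cylinder of radius R and wall thickness t, the section moment of inertia is approximately I = pi R^3 t. The Fluegge and Donnell shell results and the orthotropic laminate cases reduce toward this limit as the cylinder becomes long.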

  13. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods of the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the crop. The height of the plant, the number of stems, and the color of the leaves are well-known parameters indicating rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis has been proposed that is based on the vegetation cover rate of rice. The vegetation cover rate is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image is done by automatic binarization processing. However, when the vegetation cover rate calculation depends on automatic binarization alone, the calculated cover rate can decrease even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on automatic binarization and additionally refers to growth-hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and the results of both methods were compared with reference values obtained by visual interpretation. The comparison showed that the accuracy of discriminating rice plant areas was increased by the proposed method.
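
    A minimal sketch of the automatic-binarization baseline follows, using an excess-green threshold as a stand-in for whatever binarization the survey used (the proposed method would additionally consult the growth-hysteresis information before accepting the result):

    ```python
    import numpy as np

    def cover_rate(rgb):
        """Vegetation cover rate from a nadir RGB image via a simple
        excess-green index threshold (illustrative automatic binarization)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        exg = 2.0 * g - r - b           # excess-green index
        plant = exg > exg.mean()        # crude automatic threshold
        return plant.mean()             # fraction of plant pixels

    # Synthetic 100x100 example: a green square on soil-colored background.
    img = np.full((100, 100, 3), [120.0, 100.0, 80.0])
    img[30:70, 30:70] = [60.0, 160.0, 60.0]
    print(cover_rate(img))              # ~0.16 (= 40*40 / 100*100)
    ```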

  14. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  15. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  16. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456
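
    The two computationally intensive matrix-vector operations mentioned in all three records above are the dose forward-projection and the gradient back-projection. A minimal projected-gradient sketch with invented dimensions (the real library runs these products on the GPU and couples them with the adaptive particle sampling):

    ```python
    import numpy as np

    # Inner loop of MC-based inverse planning: spot doses d = D @ w and the
    # gradient back-projection D.T @ r dominate the cost per iteration.
    rng = np.random.default_rng(1)
    n_voxels, n_spots = 5000, 200
    D = rng.random((n_voxels, n_spots)) * 0.01  # stands in for MC spot doses
    d_presc = np.ones(n_voxels)                 # prescription per voxel

    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    w = np.ones(n_spots)                        # nonnegative spot weights
    for _ in range(200):
        r = D @ w - d_presc                     # residual dose
        w = np.maximum(w - (D.T @ r) / L, 0.0)  # projected gradient step
    print(float(np.linalg.norm(D @ w - d_presc)))
    ```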

  17. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    PubMed

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.

  18. The scenario-based generalization of radiation therapy margins.

    PubMed

    Fredriksson, Albin; Bokrantz, Rasmus

    2016-03-07

    We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in 'easy' scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty.
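
    Schematically, scenario-based robust formulations of this kind optimize a composite over scenario doses rather than a single nominal dose; one common minimax form (notation ours, not necessarily the paper's exact objective) is

    ```latex
    \min_{x}\;\max_{s \in S}\;\sum_{i} w_{i}\left(d_{i}^{s}(x)-\hat{d}_{i}\right)^{2}
    ```

    where $d_{i}^{s}(x)$ is the dose to voxel $i$ in scenario $s$ and $\hat{d}_{i}$ the prescription. The paper's observation is that if $d^{s}$ is computed with the static dose cloud approximation this reduces to planning with geometric margins, while accurate scenario doses turn the same formulation into a genuinely robust method.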

  19. A formula for calculating theoretical photoelectron fluxes resulting from the He+ 304 A solar spectral line

    NASA Technical Reports Server (NTRS)

    Richards, P. G.; Torr, D. G.

    1981-01-01

    A simplified method for the evaluation of theoretical photoelectron fluxes in the upper atmosphere resulting from the solar radiation at 304 A is presented. The calculation is based on considerations of primary and cascade (secondary) photoelectron production in the two-stream model, where photoelectron transport is described by two electron streams, one moving up and one moving down, and of loss rates due to collisions with neutral gases and thermal electrons. The calculation is illustrated for the case of photoelectrons at an energy of 24.5 eV, and it is noted that the 24.5-eV photoelectron flux may be used to monitor variations in the solar 304 A flux. Theoretical calculations based on various ionization and excitation cross sections of Banks et al. (1974) are shown to be in generally good agreement with AE-E measurements taken between 200 and 235 km, however the use of more recent, larger cross sections leads to photoelectron values a factor of two smaller than observations but in agreement with previous calculations. It is concluded that a final resolution of the photoelectron problem may depend on a reevaluation of the inelastic electron collision cross sections.

  20. Retail Building Guide for Entrance Energy Efficiency Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, J.; Kung, F.

    2012-03-01

    This booklet is based on the findings of an infiltration analysis for supermarkets and large retail buildings without refrigerated cases. It enables retail building managers and engineers to calculate the energy savings potential for vestibule additions for supermarkets; and bay door operation changes in large retail stores without refrigerated cases. Retail managers can use initial estimates to decide whether to engage vendors or contractors of vestibules for pricing or site-specific analyses, or to decide whether to test bay door operation changes in pilot stores, respectively.

  1. Scattering by Artificial Wind and Rain Roughened Water Surfaces at Oblique Incidences

    NASA Technical Reports Server (NTRS)

    Craeye, C.; Sobieski, P. W.; Bliven, L. F.

    1997-01-01

    Rain affects wind retrievals from scatterometric measurements of the sea surface. To depict the additional roughness caused by rain on a wind-driven surface, we use a ring-wave spectral model. This enables us to analyse the rain effect on Ku-band scatterometric observations from two laboratory experiments. Calculations based on the small perturbation method provide good simulation of the scattering measurements for the rain-only case, whereas for combined wind and rain cases, the boundary perturbation method is appropriate.


  2. Advanced Amine Solvent Formulations and Process Integration for Near-Term CO2 Capture Success

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Kevin S.; Searcy, Katherine; Rochelle, Gary T.

    2007-06-28

    This Phase I SBIR project investigated the economic and technical feasibility of advanced amine scrubbing systems for post-combustion CO2 capture at coal-fired power plants. Numerous combinations of advanced solvent formulations and process configurations were screened for energy requirements, and three cases were selected for detailed analysis: a monoethanolamine (MEA) base case and two “advanced” cases: an MEA/Piperazine (PZ) case, and a methyldiethanolamine (MDEA) / PZ case. The MEA/PZ and MDEA/PZ cases employed an advanced “double matrix” stripper configuration. The basis for calculations was a model plant with a gross capacity of 500 MWe. Results indicated that CO2 capture increased the base cost of electricity from 5 cents/kWh to 10.7 c/kWh for the MEA base case, 10.1 c/kWh for the MEA / PZ double matrix, and 9.7 c/kWh for the MDEA / PZ double matrix. The corresponding cost per metric tonne CO2 avoided was 67.20 $/tonne CO2, 60.19 $/tonne CO2, and 55.05 $/tonne CO2, respectively. Derated capacities, including base plant auxiliary load of 29 MWe, were 339 MWe for the base case, 356 MWe for the MEA/PZ double matrix, and 378 MWe for the MDEA / PZ double matrix. When compared to the base case, systems employing advanced solvent formulations and process configurations were estimated to reduce reboiler steam requirements by 20 to 44%, to reduce derating due to CO2 capture by 13 to 30%, and to reduce the cost of CO2 avoided by 10 to 18%. These results demonstrate the potential for significant improvements in the overall economics of CO2 capture via advanced solvent formulations and process configurations.
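
    As a rough, hedged check of the avoided-cost arithmetic (the study's capacity factor and specific-emission inputs are not given in the abstract, so the values below are illustrative assumptions only):

        # Cost of CO2 avoided from the change in cost of electricity (COE)
        # and the change in specific emissions. Inputs are illustrative, not the study's.
        def cost_of_co2_avoided(coe_ref, coe_capture, em_ref, em_capture):
            """coe_* in $/kWh, em_* in tonne CO2/kWh -> $/tonne CO2 avoided."""
            return (coe_capture - coe_ref) / (em_ref - em_capture)

        em_ref = 0.85e-3      # tonne CO2/kWh without capture (assumed)
        em_capture = 0.12e-3  # tonne CO2/kWh with ~90% capture and derating (assumed)

        # MEA base case COE change from the abstract: 5.0 -> 10.7 cents/kWh.
        print(cost_of_co2_avoided(0.050, 0.107, em_ref, em_capture))  # ~78 $/tonne

    With these assumed emission figures the result (~78 $/tonne) differs from the reported 67.20 $/tonne; reproducing the study's value would require its actual emission and capacity-factor inputs.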

  3. Clinicopathologic Correlation of White, Non scrapable Oral Mucosal Surface Lesions: A Study of 100 Cases.

    PubMed

    Abidullah, Mohammed; Raghunath, Vandana; Karpe, Tanveer; Akifuddin, Syed; Imran, Shahid; Dhurjati, Venkata Naga Nalini; Aleem, Mohammed Ahtesham; Khatoon, Farheen

    2016-02-01

    White, non-scrapable lesions are commonly seen in the oral cavity. Based on their history and clinical appearance, most of these lesions can be easily diagnosed, but sometimes the diagnosis may go wrong. In order to arrive at a confirmative diagnosis, histopathological assessment is needed in many cases, if not all. The aims were: 1) to find out the prevalence of clinically diagnosed oral white, non-scrapable lesions; 2) to find out the prevalence of histopathologically diagnosed oral white, non-scrapable lesions; and 3) to correlate the clinical and histopathological diagnoses of these lesions. A total of 100 cases of oral white, non-scrapable lesions were included in the study. Based on history and clinical presentation, a provisional clinical diagnosis was made. A biopsy was then performed, a confirmatory histopathological diagnosis was given, and the two were correlated. To correlate clinical and histopathological diagnoses, a Discrepancy Index (DI) was calculated for all cases. Based on clinical diagnosis, there were 59 cases (59%) of leukoplakia, 29 cases (29%) of lichen planus and six cases (6%) of lichenoid reaction; whereas, based on histopathological diagnosis, there were 66 cases (66%) of epithelial hyperplasia and hyperkeratosis (leukoplakia) and 30 cases (30%) of lichen planus. Seventy-eight clinically diagnosed cases (78%) correlated with the histopathological diagnosis and 22 cases (22%) did not. The total discrepancy index was 22%. A clinician needs to be aware of oral white, non-scrapable lesions. Due to the overlap of many clinical features in some of these lesions, and also due to their malignant potential, a histopathological confirmative diagnosis is recommended.

  4. MCNP-based computational model for the Leksell gamma knife.

    PubMed

    Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav

    2007-01-01

    We have focused on the use of the MCNP code for calculation of Gamma Knife radiation field parameters with a homogeneous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on the EGS4 and PENELOPE codes as well as with the Leksell Gamma Knife treatment planning system Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources at the same time. Within each beam, it considers the technical construction of the source, the source holder, the collimator system, the spherical phantom, and the surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along the basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' work except in the case of the 4 mm collimator, where averaging over the scoring volume and statistical uncertainties strongly influence the calculated results. In general, all the results depend on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken with the fluctuations within the plateau, which can influence the normalization, and with the accuracy in determining the isocenter position, which is important when comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement, as are the integral doses calculated in small calculation matrix volumes. However, deviations in integral doses of up to 50% can be observed for large volumes such as the total skull volume. The differences in the treatment of scattered radiation between the MC method and the LGP may be important in this case. We have also studied the influence of differential direction sampling of primary photons and have found that, due to the anisotropic sampling, doses around the isocenter deviate from each other by up to 6%. With caution about the details of the calculation settings, it is possible to employ the MCNP Monte Carlo code for independent verification of the Leksell Gamma Knife radiation field properties.

  5. [Modelling of the costs of productivity losses due to smoking in Germany for the year 2005].

    PubMed

    Prenzler, A; Mittendorf, T; von der Schulenburg, J M

    2007-11-01

    The aim of this study was to estimate disease-related productivity costs attributable to smoking in the year 2005 in Germany. The calculation was based on the updated relative smoking-related disease risks found in the US Cancer Prevention Study II, combined with data on smoking prevalence in Germany. With this, smoking-attributable cases resulting in premature mortality, invalidity, and temporary disability to work could be estimated. Neoplasms, diseases of the circulatory and respiratory systems, as well as health problems in children younger than one year were considered in the analysis. The human capital approach was applied to calculate years of potential work loss and productivity costs resulting from smoking. Various sensitivity analyses were conducted to test the robustness of the underlying model. Based on the assumptions within the model, 107,389 deaths, 14,112 invalidity cases, and 1.19 million cases of temporary disability to work were attributable to smoking in 2005 in Germany. As a result, productivity costs of 9.6 billion euros were caused by smoking. The model showed that smoking imposes a substantial financial burden. Even so, further analyses are necessary to estimate the overall impact of smoking on German society.

  6. Paraneoplastic autoantibody panels: sensitivity and specificity, a retrospective cohort.

    PubMed

    Albadareen, Rawan; Gronseth, Gary; Goeden, Marcie; Sharrock, Matthew; Lechtenberg, Colleen; Wang, Yunxia

    2017-06-01

    Experts in the autoimmune paraneoplastic field recommend autoantibody testing as "panels" to improve the poor sensitivity of individual autoantibodies in detecting paraneoplastic neurological syndromes (PNS). The sensitivity of those panels has not been reported to date in a fashion devoid of incorporation bias. We aimed to assess the collective sensitivity and specificity of one of the commonly used panels in detecting PNS. We studied a single-center retrospective cohort of all patients tested with a paraneoplastic evaluation panel (PAVAL; test ID: 83380) over one year for suspicion of PNS. Case adjudication was based on newly proposed diagnostic criteria in line with previously published literature, but modified to exclude serological status to avoid incorporation bias. Measures of diagnostic accuracy were subsequently calculated. Cases that failed to show an association with malignancy within the follow-up time studied, reflecting a possibly pure autoimmune process, were considered paraneoplastic-like syndromes. Of 321 patients tested, 51 tested positive. Thirty-two patients met diagnostic criteria for paraneoplastic/paraneoplastic-like syndromes. The calculated collective sensitivity was 34% (95% CI: 17-53), specificity was 86% (95% CI: 81-90), Youden's index was 0.2, and the positive clinical utility index was 0.07, suggesting poor utility for case detection. This is the first report of diagnostic accuracy measures for paraneoplastic panels free of incorporation bias. Despite recommended panel testing to improve detection of PNS, sensitivity remains low, with poor utility for case detection. The high calculated specificity suggests a possible role in confirming the condition in difficult cases suspicious for PNS when enough supportive evidence is lacking on ancillary testing.
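
    The quoted measures follow from a standard 2x2 cross-tabulation. A minimal sketch, with counts inferred to be consistent with the abstract's totals (321 tested, 51 positive, 32 adjudicated cases); the exact table is our reconstruction, not reported data:

        # Diagnostic accuracy from a 2x2 table.
        def diagnostic_accuracy(tp, fp, fn, tn):
            sens = tp / (tp + fn)        # sensitivity
            spec = tn / (tn + fp)        # specificity
            ppv = tp / (tp + fp)         # positive predictive value
            npv = tn / (tn + fn)         # negative predictive value
            youden = sens + spec - 1.0   # Youden's J
            return sens, spec, ppv, npv, youden

        # tp=11 of 32 cases test positive -> sensitivity ~0.34;
        # tn=249 of 289 non-cases test negative -> specificity ~0.86; J ~0.2.
        print(diagnostic_accuracy(tp=11, fp=40, fn=21, tn=249))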

  7. Assessing DRG cost accounting with respect to resource allocation and tariff calculation: the case of Germany

    PubMed Central

    2012-01-01

    The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the “one hospital” approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the “one hospital” model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital’s cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level. PMID:22935314

  8. Assessing DRG cost accounting with respect to resource allocation and tariff calculation: the case of Germany.

    PubMed

    Vogl, Matthias

    2012-08-30

    The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the "one hospital" approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the "one hospital" model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital's cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level.

  9. Importance of Ambipolar Electric Field in the Ion Loss from Mars- Results from a Multi-fluid MHD Model with the Electron Pressure Equation Included

    NASA Astrophysics Data System (ADS)

    Ma, Y.; Dong, C.; van der Holst, B.; Nagy, A. F.; Bougher, S. W.; Toth, G.; Cravens, T.; Yelle, R. V.; Jakosky, B. M.

    2017-12-01

    The multi-fluid (MF) magnetohydrodynamic (MHD) model of Mars is further improved by solving an additional electron pressure equation. Through the electron pressure equation, the electron temperature is calculated based on the effects of various electron-related heating and cooling processes (e.g., photoelectron heating, electron-neutral collisions and electron-ion collisions), and thus the improved model is able to calculate the electron temperature and the electron pressure force self-consistently. Electron thermal conductivity is also considered in the calculation. Model results for a normal case with the electron pressure equation included (MFPe) are compared in detail to an identical case using the regular MF model to identify the effect of the improved physics. We found that when the electron pressure equation is included, the general interaction patterns are similar to those of the case without it. The model with the electron pressure equation predicts an electron temperature much larger than the ion temperature in the ionosphere, consistent with both Viking and MAVEN observations. The inclusion of the electron pressure equation significantly increases the total escape fluxes predicted by the model, indicating the importance of the ambipolar electric field (electron pressure gradient) in driving ion loss from Mars.
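
    The ambipolar field named here is the electron-pressure-gradient term of the generalized Ohm's law,

        \mathbf{E}_{\mathrm{ambipolar}} = -\,\frac{\nabla p_e}{e\,n_e},

    so a self-consistently computed electron pressure p_e directly sets the field that helps accelerate ionospheric ions to escape.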

  10. Accuracy of Protein Embedding Potentials: An Analysis in Terms of Electrostatic Potentials.

    PubMed

    Olsen, Jógvan Magnus Haugaard; List, Nanna Holmgaard; Kristensen, Kasper; Kongsted, Jacob

    2015-04-14

    Quantum-mechanical embedding methods have in recent years gained significant interest and may now be applied to predict a wide range of molecular properties calculated at different levels of theory. To reach a high level of accuracy in embedding methods, both the electronic structure model of the active region and the embedding potential need to be of sufficiently high quality. In fact, failures in quantum mechanics/molecular mechanics (QM/MM)-based embedding methods have often been attributed to the QM/MM methodology itself; however, in many cases such failures are due to the use of an inaccurate embedding potential. In this paper, we investigate in detail the quality of the electronic component of embedding potentials designed for calculations on protein biostructures. We show that very accurate explicitly polarizable embedding potentials may be efficiently designed using fragmentation strategies combined with single-fragment ab initio calculations. In fact, due to the self-interaction error in Kohn-Sham density functional theory (KS-DFT), the use of large full-structure quantum-mechanical calculations based on conventional (hybrid) functionals leads to less accurate embedding potentials than fragment-based approaches. We also find that standard protein force fields yield poor embedding potentials, and it is therefore not advisable to use such force fields in general QM/MM-type calculations of molecular properties other than energies and structures.

  11. Surface Coverage and Metallicity of ZnO Surfaces from First-Principles Calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Schleife, Andre; The Schleife research Group Team

    Zinc oxide (ZnO) surfaces are widely used in applications such as catalysis, biosensing, and solar cells. These surfaces are, in many cases, chemically terminated by hydroxyl groups. Experimentally, a transition of the ZnO surface electronic properties from semiconducting to metallic was reported upon increasing the hydroxyl coverage to more than approximately 80%. The reason for this transition is not yet well understood. We report on first-principles calculations based on density functional theory for the nonpolar ZnO (10-10) surface, taking different amounts of hydroxyl coverage into account. We calculated band structures for fully relaxed configurations and verified the existence of this transition. However, we only find the fully covered surface to be metallic. We thus explore the possibility of clustering of the surface-terminating hydroxyl groups based on total-energy calculations. We also found that the valence band maximum consists of oxygen p states from both the surface hydroxyl groups and the surface oxygen atoms of the material. The main contribution to the metallicity is found to come from the hydroxyl groups.

  12. Application of a prospective model for calculating worker exposure due to the air pathway for operations in a laboratory.

    PubMed

    Grimbergen, T W M; Wiegman, M M

    2007-01-01

    In order to arrive at recommendations for guidelines on maximum allowable quantities of radioactive material in laboratories, a proposed mathematical model was used to calculate transfer fractions for the air pathway. A set of incident scenarios was defined, including spilling, leakage and failure of the fume hood. For these 'common incidents', dose constraints of 1 mSv and 0.1 mSv are proposed for operations performed in a controlled area and a supervised area, respectively. In addition, a dose constraint of 1 microSv is proposed for each operation under regular working conditions. Combining these dose constraints and the transfer fractions calculated with the proposed model, maximum allowable quantities were calculated for different laboratory operations and situations. Provided that the calculated transfer fractions can be experimentally validated and the dose constraints are acceptable, it can be concluded from the results that the dose constraint for incidents is the most restrictive. For non-volatile materials this approach leads to quantities much larger than those commonly accepted. In those cases, the results of the calculations in this study suggest that limitation of the quantity of radioactive material which can be handled safely should be based on considerations other than inhalation risk. Examples of such considerations might be the level of external exposure, uncontrolled spread of radioactive material by surface contamination, emissions to the environment and severe accidents such as fire.
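
    A hedged sketch of the linear bookkeeping such models reduce to (the single-compartment formula and every number below are illustrative assumptions, not the paper's validated transfer fractions): the committed dose scales linearly with the handled quantity Q, so the maximum allowable quantity follows by inverting the dose constraint.

        # Illustrative maximum allowable quantity for the air pathway.
        # Single-room, no-ventilation model; all parameter values are assumptions.
        def max_allowable_quantity_bq(dose_constraint_sv, transfer_fraction,
                                      breathing_rate_m3_h, exposure_time_h,
                                      room_volume_m3, dose_coeff_sv_per_bq):
            conc_per_bq = transfer_fraction / room_volume_m3    # airborne activity per Bq handled
            intake_per_bq = conc_per_bq * breathing_rate_m3_h * exposure_time_h
            dose_per_bq = intake_per_bq * dose_coeff_sv_per_bq  # committed dose per Bq handled
            return dose_constraint_sv / dose_per_bq

        # Example: 1 mSv incident constraint, 1e-3 transfer fraction,
        # 0.5 h exposure in a 100 m3 room, I-131-like inhalation dose coefficient.
        print(max_allowable_quantity_bq(1e-3, 1e-3, 1.2, 0.5, 100.0, 2e-8))  # ~8e9 Bq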

  13. Disability weights based on patient-reported data from a multinational injury cohort

    PubMed Central

    Lyons, Ronan A; Simpson, Pamela M; Rivara, Frederick P; Ameratunga, Shanthi; Polinder, Suzanne; Derrett, Sarah; Harrison, James E

    2016-01-01

    Objective: To create patient-based disability weights for individual injury diagnosis codes and nature-of-injury classifications, for use, as an alternative to panel-based weights, in studies of the burden of disease. Methods: Self-reported data based on the EQ-5D standardized measure of health status were collected from 29 770 participants in the Injury-VIBES injury cohort study, which covered Australia, the Netherlands, New Zealand, the United Kingdom of Great Britain and Northern Ireland and the United States of America. The data were combined to calculate new disability weights for each common injury classification and for each type of diagnosis covered by the 10th revision of the International statistical classification of diseases and related health problems. Weights were calculated separately for hospital admissions and presentations confined to emergency departments. Findings: There were 29 770 injury cases with at least one EQ-5D score. The mean age of the participants providing data was 51 years. Most participants were male and almost a third had road traffic injuries. The new disability weights were higher for admitted cases than for cases confined to emergency departments, and higher than the corresponding weights used by the Global Burden of Disease 2013 study. Long-term disability was common in most categories of injury. Conclusion: Injury is often a chronic disorder and burden of disease estimates should reflect this. Application of the new weights to burden studies would substantially increase estimates of disability-adjusted life-years and provide a more accurate reflection of the impact of injuries on people's lives. PMID:27821883

  14. Calculating observables in inhomogeneous cosmologies. Part I: general framework

    NASA Astrophysics Data System (ADS)

    Hellaby, Charles; Walters, Anthony

    2018-02-01

    We lay out a general framework for calculating the variation of a set of cosmological observables, down the past null cone of an arbitrarily placed observer, in a given arbitrary inhomogeneous metric. The observables include redshift, proper motions, area distance and redshift-space density. Of particular interest are observables that are zero in the spherically symmetric case, such as proper motions. The algorithm is based on the null geodesic equation and the geodesic deviation equation, and it is tailored to creating a practical numerical implementation. The algorithm provides a method for tracking which light rays connect moving objects to the observer at successive times. Our algorithm is applied to the particular case of the Szekeres metric. A numerical implementation has been created and some results will be presented in a subsequent paper. Future work will explore the range of possibilities.
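
    For reference, the two named ingredients are the standard ones (sign conventions vary between texts); with affine parameter λ, tangent k^μ = dx^μ/dλ and connecting vector ξ^μ,

        \frac{d^{2}x^{\mu}}{d\lambda^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\lambda}\,\frac{dx^{\beta}}{d\lambda} = 0,
        \qquad
        \frac{D^{2}\xi^{\mu}}{D\lambda^{2}} = -\,R^{\mu}{}_{\alpha\nu\beta}\,k^{\alpha}\,\xi^{\nu}\,k^{\beta},

    the first propagating each ray down the past null cone and the second tracking how neighbouring rays separate, which is what carries the proper-motion and area-distance information.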

  15. Influence of strain on dislocation core in silicon

    NASA Astrophysics Data System (ADS)

    Pizzagalli, L.; Godet, J.; Brochard, S.

    2018-05-01

    First-principles, density-functional-based tight-binding and semi-empirical interatomic potential calculations are performed to analyse the influence of large strains on the structure and stability of a 60° dislocation in silicon. Such strains typically arise during the mechanical testing of nanostructures like nanopillars or nanoparticles. We focus on bi-axial strains in the plane normal to the dislocation line. Our calculations surprisingly reveal that the dislocation core structure largely depends on the applied strain for strain levels of about 5%. In the particular case of bi-axial compression, the transformation of the dislocation to a locally disordered configuration occurs for similar strain magnitudes. The formation of an opening, however, requires larger strains of about 7.5%. Furthermore, our results suggest that electronic structure methods should be favoured, whenever possible, to model dislocation cores in the case of large strains.

  16. Reconstruction of radial thermal conductivity depth profile in case hardened steel rods

    NASA Astrophysics Data System (ADS)

    Celorrio, Ricardo; Mendioroz, Arantza; Apiñaniz, Estibaliz; Salazar, Agustín; Wang, Chinhua; Mandelis, Andreas

    2009-04-01

    In this work the surface thermal-wave field (ac temperature) of a solid cylinder illuminated by a modulated light beam is calculated first in two cases: a multilayered cylinder and a cylinder the radial thermal conductivity of which varies continuously. It is demonstrated numerically that, using a few layers of different thicknesses, the surface thermal-wave field of a cylindrical sample with continuously varying radial thermal conductivity can be calculated with high accuracy. Next, an inverse procedure based on the multilayered model is used to reconstruct the radial thermal conductivity profile of hardened C1018 steel rods, the surface temperature of which was measured by photothermal radiometry. The reconstructed thermal conductivity depth profile has a similar shape to those found for flat samples of this material and shows a qualitative anticorrelation with the hardness depth profile.

  17. Two-dimensional boron: Lightest catalyst for hydrogen and oxygen evolution reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mir, Showkat H.; Chakraborty, Sudip; Wärnå, John

    The hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) have been envisaged on a two-dimensional (2D) boron sheet through electronic structure calculations based on a density functional theory framework. To date, boron sheets are the lightest 2D material and, therefore, exploring the catalytic activity of such a monolayer system would be quite intuitive both from fundamental and application perspectives. We have functionalized the boron sheet (BS) with different elemental dopants like carbon, nitrogen, phosphorous, sulphur, and lithium and determined the adsorption energy for each case while hydrogen and oxygen are on top of the doping site of the boron sheet. The free energy calculated from the individual adsorption energy for each functionalized BS subsequently guides us to predict which case of functionalization serves better for the HER or the OER.
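
    The free-energy step is presumably the standard computational-hydrogen-electrode construction (our reading; the abstract does not spell it out). For the HER the descriptor is the hydrogen adsorption free energy,

        \Delta G_{\mathrm{H}^{*}} = \Delta E_{\mathrm{H}} + \Delta E_{\mathrm{ZPE}} - T\,\Delta S_{\mathrm{H}} \;\approx\; \Delta E_{\mathrm{H}} + 0.24\ \mathrm{eV},

    where the lumped ≈0.24 eV zero-point-plus-entropy correction is the value commonly used for adsorbed H; a promising HER site has ΔG_H* close to zero.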

  18. Validation of a personalized dosimetric evaluation tool (Oedipe) for targeted radiotherapy based on the Monte Carlo MCNPX code

    NASA Astrophysics Data System (ADS)

    Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.

    2006-02-01

    Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.

  19. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Reb

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.

  20. Log file-based patient dose calculations of double-arc VMAT for head-and-neck radiotherapy.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Majima, Kazuhiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2018-04-01

    Log file-based dose reconstruction cannot by itself reveal dosimetric changes due to linac component miscalibration, because log files are insensitive to such miscalibration. The purpose of this study was to quantify the dosimetric changes that leaf miscalibration introduces into log file-based patient dose calculations for double-arc volumetric-modulated arc therapy (VMAT) in head-and-neck cases. Fifteen head-and-neck cases were included in this study. For each case, treatment planning system (TPS) doses were produced for double-arc and single-arc VMAT. Miscalibration-simulated log files were generated by introducing a leaf miscalibration of ±0.5 mm into the log files acquired during VMAT irradiation. Patient doses were then estimated using the miscalibration-simulated log files. For double-arc VMAT, the change from the TPS dose to the miscalibration-simulated log file dose for the planning target volume (PTV) was 0.9 Gy in D mean and 1.4% in tumor control probability. For organs at risk (OARs), the change in D mean was <0.7 Gy and the change in normal tissue complication probability was <1.8%. A comparison between double-arc and single-arc VMAT for the PTV showed statistically significant differences in the changes evaluated by D mean and radiobiological metrics (P < 0.01), even though the magnitude of these differences was small. Similarly, for OARs, the magnitude of these changes was found to be small. For PTV and OARs, the log file-based estimate of patient dose for double-arc VMAT has accuracy comparable to that obtained for single-arc VMAT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  1. Study on Brain Injury Biomechanics Based on the Real Pedestrian Traffic Accidents

    NASA Astrophysics Data System (ADS)

    Feng, Chengjian; Yin, Zhiyong

    This paper investigates the dynamic response and injury mechanisms of the head based on real pedestrian traffic accidents captured on video. The kinematics of head contact with the vehicle was reconstructed using multi-body dynamics models. The calculated parameters, such as head impact velocity, impact location and head orientation, were applied to the THUMS-4 FE head model as initial conditions. The intracranial pressure and the stress in the brain were calculated from simulations of head contact with the vehicle. These results were consistent with those of other studies. This demonstrates that real traffic accidents combined with simulation analysis can be used to study head injury biomechanics. As the number of cases increases, a tolerance limit for brain injury can be proposed.

  2. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628

  3. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
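
    Photon splitting and Russian roulette are standard variance-reduction moves; a minimal sketch of the weight bookkeeping (schematic illustration, not the MCSIM implementation):

        import random

        def split(photon, n):
            """Replace one photon of weight w by n copies of weight w/n (unbiased)."""
            return [{**photon, "weight": photon["weight"] / n} for _ in range(n)]

        def russian_roulette(photon, w_min=0.1, w_survive=0.5):
            """Terminate low-weight photons probabilistically, boosting survivors
            so that the expected weight is preserved."""
            w = photon["weight"]
            if w >= w_min:
                return photon
            if random.random() < w / w_survive:   # survival probability w / w_survive
                photon["weight"] = w_survive      # E[weight] = (w/w_survive)*w_survive = w
                return photon
            return None                           # photon killed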

  4. Numerical simulation and fracture identification of dual laterolog in organic shale

    NASA Astrophysics Data System (ADS)

    Maojin, Tan; Peng, Wang; Qiong, Liu

    2012-09-01

    Fractures provide important storage space in shale oil and shale gas reservoirs, and fracture identification and evaluation are an important part of organic shale interpretation. For fractured shale gas reservoirs, a physical model is set up to study the dual laterolog responses. First, based on the principle of dual laterolog, the three-dimensional finite element method (FEM) is used to simulate the dual laterolog responses in various formation models with different fracture widths, fracture numbers and fracture inclination angles. These results are extremely important for fracture identification and evaluation in shale reservoirs. For different base-rock resistivity models, fracture models are constructed through numerical simulation, and the fracture porosity can be calculated by solving the corresponding formulas. A case study of an organic shale formation is analysed and discussed, and the fracture porosity is calculated from the dual laterolog. The fracture evaluation results are also validated by full-borehole micro-resistivity imaging (FMI). Thus, in the absence of a borehole resistivity imaging log, dual laterolog resistivity can be used to estimate fracture development.

  5. Small strain multiphase-field model accounting for configurational forces and mechanical jump conditions

    NASA Astrophysics Data System (ADS)

    Schneider, Daniel; Schoof, Ephraim; Tschukin, Oleg; Reiter, Andreas; Herrmann, Christoph; Schwab, Felix; Selzer, Michael; Nestler, Britta

    2018-03-01

    Computational models based on the phase-field method have become an essential tool in material science and physics in order to investigate materials with complex microstructures. The models typically operate on a mesoscopic length scale resolving structural changes of the material and provide valuable information about the evolution of microstructures and mechanical property relations. For many interesting and important phenomena, such as martensitic phase transformation, mechanical driving forces play an important role in the evolution of microstructures. In order to investigate such physical processes, an accurate calculation of the stresses and the strain energy in the transition region is indispensable. We recall a multiphase-field elasticity model based on the force balance and the Hadamard jump condition at the interface. We show the quantitative characteristics of the model by comparing the stresses, strains and configurational forces with theoretical predictions in two-phase cases and with results from sharp interface calculations in a multiphase case. As an application, we choose the martensitic phase transformation process in multigrain systems and demonstrate the influence of the local homogenization scheme within the transition regions on the resulting microstructures.
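
    The interface conditions invoked are, in small-strain notation with [[·]] the jump across the interface and n its unit normal (a schematic statement of the sharp-interface conditions the phase-field model regularizes):

        [\![\boldsymbol{\sigma}]\!]\,\mathbf{n} = \mathbf{0} \quad\text{(traction balance)},
        \qquad
        [\![\boldsymbol{\varepsilon}]\!] = \tfrac{1}{2}\,(\mathbf{a}\otimes\mathbf{n} + \mathbf{n}\otimes\mathbf{a}) \quad\text{(Hadamard compatibility)},

    i.e., tractions are continuous across the interface while strains may jump only in the rank-one manner compatible with a coherent interface.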

  6. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing numerical algorithms for computational fluid dynamics (CFD). This is especially important for solutions of complex three-dimensional systems of Navier-Stokes equations, which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, two new flux splitting techniques for upwind differencing are presented. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second is based on the Advection Upwind Splitting Method (AUSM). The calculation of a hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests, involving two-dimensional inviscid flow over a NACA 0012 airfoil, demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two-dimensional shock wave/boundary layer interaction.
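
    To make the AUSM idea concrete, here is a minimal first-order 1-D sketch of the core splitting (our illustration using the simpler of the pressure splittings described in the AUSM literature; higher-order reconstruction, boundary handling, and the HOPE variant are omitted):

        import numpy as np

        GAMMA = 1.4

        def ausm_flux(rho_l, u_l, p_l, rho_r, u_r, p_r):
            """First-order AUSM interface flux for the 1-D Euler equations."""
            def mach_plus(m):             # left-state split Mach number
                return 0.25 * (m + 1.0) ** 2 if abs(m) <= 1.0 else max(m, 0.0)

            def mach_minus(m):            # right-state split Mach number
                return -0.25 * (m - 1.0) ** 2 if abs(m) <= 1.0 else min(m, 0.0)

            def p_split(m, p, sign):      # simple (1 +/- M)/2 pressure splitting
                if abs(m) <= 1.0:
                    return 0.5 * p * (1.0 + sign * m)
                return p if sign * m > 0.0 else 0.0

            a_l = np.sqrt(GAMMA * p_l / rho_l)
            a_r = np.sqrt(GAMMA * p_r / rho_r)
            m_half = mach_plus(u_l / a_l) + mach_minus(u_r / a_r)  # interface Mach number

            H_l = GAMMA / (GAMMA - 1.0) * p_l / rho_l + 0.5 * u_l ** 2
            H_r = GAMMA / (GAMMA - 1.0) * p_r / rho_r + 0.5 * u_r ** 2
            # Convected vector a*(rho, rho*u, rho*H) taken from the upwind side.
            phi = (a_l * np.array([rho_l, rho_l * u_l, rho_l * H_l]) if m_half >= 0.0
                   else a_r * np.array([rho_r, rho_r * u_r, rho_r * H_r]))

            p_face = p_split(u_l / a_l, p_l, +1.0) + p_split(u_r / a_r, p_r, -1.0)
            return m_half * phi + np.array([0.0, p_face, 0.0])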

  7. Validity of a computerized population registry of dementia based on clinical databases.

    PubMed

    Mar, J; Arrospide, A; Soto-Gordoa, M; Machón, M; Iruin, Á; Martinez-Lage, P; Gabilondo, A; Moreno-Izco, F; Gabilondo, A; Arriola, L

    2018-05-08

    The handling of information through digital media allows innovative approaches to identifying cases of dementia through computerized searches of clinical databases that include diagnostic coding systems. The aim of this study was to analyze the validity of a dementia registry in Gipuzkoa based on the administrative and clinical databases of the Basque Health Service. This is a descriptive study based on the evaluation of available data sources. First, through review of medical records, diagnostic validity was evaluated in 2 samples of cases identified and not identified as dementia. The sensitivity, specificity and positive and negative predictive values of the diagnosis of dementia were measured. Subsequently, the cases of dementia alive on December 31, 2016 were identified in the entire Gipuzkoa population, and sociodemographic and clinical variables were collected. The validation samples included 986 cases and 327 non-cases. The calculated sensitivity was 80.2% and the specificity was 99.9%. The negative predictive value was 99.4% and the positive predictive value was 95.1%. There were 10,551 cases in Gipuzkoa, representing 65% of the cases predicted from the literature. Antipsychotic medication was taken by 40% of the cases, and 25% were institutionalized. A registry of dementias based on clinical and administrative databases is valid and feasible. Its main contribution is to show the dimension of dementia in the health system. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  8. Uncertainty quantification for accident management using ACE surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.

    The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)

  9. UV Lidar Receiver Analysis for Tropospheric Sensing of Ozone

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; DeYoung, Russell J.

    2013-01-01

    A simulation of a ground-based ultraviolet differential absorption lidar (UV-DIAL) receiver system was performed under realistic daytime conditions to understand how range and lidar performance can be improved for a given UV pulse laser energy. Calculations were also performed for an aerosol channel transmitting at 3 W. The lidar receiver simulation studies were optimized for tropospheric ozone measurements. The transmitted UV wavelengths ranged from 285 to 295 nm, and the aerosol channel operated at 527 nm. The calculations are based on atmospheric transmission given by the HITRAN database and on Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological data. The aerosol attenuation is estimated using both the BACKSCAT 4.0 code and data collected during the CALIPSO mission. The lidar performance is estimated both for diffuse-irradiance-free cases corresponding to nighttime operation and for the daytime diffuse scattered radiation component, based on previously reported experimental data. This analysis presents calculations of the UV-DIAL receiver ozone and aerosol measurement range as a function of sky irradiance, filter bandwidth and laser transmitted UV and 527-nm energy.
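
    The ozone retrieval that such a receiver ultimately serves is the standard DIAL relation (quoted for context; the paper's simulation treats the noise terms that limit it):

        n_{\mathrm{O_3}}(r) = \frac{1}{2\,\Delta\sigma}\,\frac{d}{dr}\,\ln\!\left[\frac{P(\lambda_{\mathrm{off}},\,r)}{P(\lambda_{\mathrm{on}},\,r)}\right],

    where P(λ, r) are the range-resolved returns at the on- and off-wavelengths (here within 285-295 nm) and Δσ is the differential ozone absorption cross section; daytime sky irradiance degrades the range by inflating the uncertainty of P.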

  10. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    PubMed

    Li, Lian-Hui; Mo, Rong

    2015-01-01

    The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, a hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judge importance degree, and then a trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing the Euclidean distance with a relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study is given to illustrate the method's correctness and feasibility.
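
    A compact sketch of the TOPSIS step (the decision matrix, weights, and relative-entropy variant below are illustrative assumptions; the paper's exact entropy distance may differ):

        import numpy as np

        def topsis(X, w, benefit, use_relative_entropy=False):
            """Rank alternatives (rows of X) by closeness to the ideal solution.
            X: (tasks x criteria) matrix; w: weights summing to 1;
            benefit: True where larger values are better."""
            V = X / np.linalg.norm(X, axis=0) * w                 # weighted normalized matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

            if use_relative_entropy:                              # one common entropy-based distance
                def dist(v, ref):
                    p, q = ref + 1e-12, v + 1e-12
                    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))
            else:                                                 # classical Euclidean distance
                def dist(v, ref):
                    return float(np.sqrt(np.sum((v - ref) ** 2)))

            d_plus = np.array([dist(v, ideal) for v in V])
            d_minus = np.array([dist(v, anti) for v in V])
            closeness = d_minus / (d_plus + d_minus)              # larger = higher queue priority
            return np.argsort(-closeness), closeness

        # Hypothetical queue: 4 tasks, 3 criteria (urgency, value, resource cost).
        X = np.array([[0.7, 0.9, 0.4], [0.5, 0.6, 0.2], [0.9, 0.4, 0.8], [0.3, 0.8, 0.5]])
        order, score = topsis(X, np.array([0.5, 0.3, 0.2]), np.array([True, True, False]))
        print(order, np.round(score, 3))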

  11. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop

    PubMed Central

    Li, Lian-hui; Mo, Rong

    2015-01-01

    The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, a hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judge importance degree, and then a trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing the Euclidean distance with a relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study is given to illustrate the method's correctness and feasibility. PMID:26414758

  12. Risk-based containment and air monitoring criteria for work with dispersible radioactive materials.

    PubMed

    Veluri, Venkateswara Rao; Justus, Alan L

    2013-04-01

    This paper presents readily understood, technically defensible, risk-based containment and air monitoring criteria, which are developed from fundamental physical principles. The key for the development of each criterion was the use of a calculational de minimis level, in this case chosen to be 100 mrem (or 40 DAC-h). Examples are provided that demonstrate the effective use of each criterion. Comparison to other often used criteria is provided.
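
    The two forms of the de minimis level are consistent under the usual conversion of 2.5 mrem of committed effective dose per DAC-hour (5 rem spread over a 2 000 DAC-h working year):

        40\ \mathrm{DAC\,h} \times 2.5\ \mathrm{mrem/DAC\,h} = 100\ \mathrm{mrem}.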

  13. Study of dose calculation on breast brachytherapy using prism TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendriani, Yoza; Haryanto, Freddy

    2015-09-30

    PRISM is a non-commercial treatment planning system (TPS) developed at the University of Washington. In Indonesia, many cancer hospitals use expensive commercial TPSs. This study aims to investigate Prism TPS as applied to brachytherapy dose distributions, taking into account the effects of source position and inhomogeneities. The results will be applicable to clinical treatment planning. Dose calculation was implemented for a water phantom and for CT images of breast cancer using point and line sources. The study was divided into two cases. In the first case, the Ir-192 seed source is located at the center of the treatment volume. In the second case, the source position is gradually changed. The dose calculation for every case was performed on a homogeneous and an inhomogeneous phantom with dimensions 20 × 20 × 20 cm³. The inhomogeneous phantom has an inhomogeneity volume of 2 × 2 × 2 cm³. The results of dose calculations using PRISM TPS were compared to literature data. The dose rates calculated with PRISM TPS show good agreement with Plato TPS and with another study published by Ramdhani; no deviations greater than ±4% were found in any case. Dose calculations in the inhomogeneous and homogeneous cases show similar results, indicating that Prism TPS performs well for brachytherapy dose calculation but is not sensitive to inhomogeneities. Thus, the dose calculation parameters developed in this study were found to be applicable to clinical treatment planning of brachytherapy.

  14. Program VSAERO theory document: A computer program for calculating nonlinear aerodynamic characteristics of arbitrary configurations

    NASA Technical Reports Server (NTRS)

    Maskew, Brian

    1987-01-01

    The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.

  15. Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.

    PubMed

    Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H

    2015-11-01

    Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimation of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool for understanding these methods and realizing the practical issues that may arise in such calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
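
    A representative member of the atomic-fluctuations family reviewed here is Schlitter's upper-bound formula (quoted for orientation as one standard method of this type):

        S \;\le\; S_{\mathrm{Schlitter}} \;=\; \frac{k_B}{2}\,\ln\,\det\!\left[\mathbf{1} + \frac{k_B T\, e^{2}}{\hbar^{2}}\,\mathbf{M}\,\boldsymbol{\sigma}\right],

    where σ is the covariance matrix of atomic positional fluctuations sampled from the trajectory, M the diagonal mass matrix, and e Euler's number; the quasi-harmonic variant instead converts the eigenvalues of the mass-weighted covariance into effective mode frequencies and sums harmonic-oscillator entropies.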

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to the departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  17. Comparative Human Health Impact Assessment of Engineered Nanomaterials in the Framework of Life Cycle Assessment.

    PubMed

    Fransman, Wouter; Buist, Harrie; Kuijpers, Eelco; Walser, Tobias; Meyer, David; Zondervan-van den Beuken, Esther; Westerhout, Joost; Klein Entink, Rinke H; Brouwer, Derk H

    2017-07-01

    For safe innovation, knowledge on potential human health impacts is essential. Ideally, these impacts are considered within a larger life-cycle-based context to support sustainable development of new applications and products. A methodological framework that accounts for human health impacts caused by inhalation of engineered nanomaterials (ENMs) in an indoor air environment has been previously developed. The objectives of this study are as follows: (i) evaluate the feasibility of applying the characterization factor (CF) framework for nanoparticle (NP) exposure in the workplace based on currently available data; and (ii) supplement any resulting knowledge gaps with methods and data from the life cycle approach and human risk assessment (LICARA) project to develop a modified case-specific version of the framework that will enable near-term inclusion of NP human health impacts in life cycle assessment (LCA), using a case study involving nanoscale titanium dioxide (nanoTiO2). The intent is to enhance typical LCA with elements of regulatory risk assessment, including its more detailed measure of uncertainty. The proof-of-principle demonstration of the framework highlighted the lack of available data on both the workplace emissions and the human health effects of ENMs that is needed to calculate generalizable characterization factors using common human health impact assessment practices in LCA. The alternative approach of using intake fractions derived from workplace air concentration measurements and effect factors based on best-available toxicity data supported the current case-by-case approach for assessing the human health life cycle impacts of ENMs. Ultimately, the proposed framework and calculations demonstrate the potential utility of integrating elements of risk assessment with LCA for ENMs once the data are available. © 2016 Society for Risk Analysis.

  18. Cosmic radiation increases the risk of nuclear cataract in airline pilots: a population-based case-control study.

    PubMed

    Rafnsson, Vilhjalmur; Olafsdottir, Eydis; Hrafnkelsson, Jon; Sasaki, Hiroshi; Arnarsson, Arsaell; Jonasson, Fridbert

    2005-08-01

    Aviation involves exposure to ionizing radiation of cosmic origin. The association between lesions of the ocular lens and ionizing radiation is well known. The objective was to investigate whether employment as a commercial airline pilot, and the resulting exposure to cosmic radiation, is associated with lens opacification. This is a population-based case-control study of 445 men. Lens opacification was classified into 4 types using the World Health Organization simplified grading system. These 4 types, serving as cases, included 71 persons with nuclear cataracts, 102 with cortical lens opacification, 69 with central optical zone involvement, and 32 with posterior subcapsular lens opacification. Control subjects were those with a different type of lens opacification or without lens opacification. Exposure was assessed based on employment time as pilots, annual number of hours flown on each aircraft type, timetables, flight profiles, and individual cumulative radiation doses (in millisieverts) calculated by a software program. Odds ratios were calculated using logistic regression. The odds ratio for nuclear cataract risk was 3.02 (95% confidence interval, 1.44-6.35) for pilots compared with nonpilots, adjusted for age, smoking status, and sunbathing habits. The odds ratio for nuclear cataract associated with the estimated cumulative radiation dose (in millisieverts) to the age of 40 years was 1.06 (95% confidence interval, 1.02-1.10), adjusted for age, smoking status, and sunbathing habits. The association between the cosmic radiation exposure of pilots and the risk of nuclear cataracts, adjusted for age, smoking status, and sunbathing habits, indicates that cosmic radiation may be a causative factor in nuclear cataracts among commercial airline pilots.

  19. Cost-minimization model of a multidisciplinary antibiotic stewardship team based on a successful implementation on a urology ward of an academic hospital.

    PubMed

    Dik, Jan-Willem H; Hendrix, Ron; Friedrich, Alex W; Luttjeboer, Jos; Panday, Prashant Nannan; Wilting, Kasper R; Lo-Ten-Foe, Jerome R; Postma, Maarten J; Sinha, Bhanu

    2015-01-01

    In order to stimulate appropriate antimicrobial use and thereby lower the chances of resistance development, an Antibiotic Stewardship Team (A-Team) has been implemented at the University Medical Center Groningen, the Netherlands. The focus of the A-Team was a pro-active day 2 case-audit, which was financially evaluated here to calculate the return on investment from a hospital perspective. Effects were evaluated by comparing audited patients with a historic cohort with the same diagnosis-related groups. Based upon this evaluation, a cost-minimization model was created that can be used to predict the financial effects of a day 2 case-audit. Sensitivity analyses were performed to deal with uncertainties. Finally, the model was used to financially evaluate the A-Team. One whole year, including 114 patients, was evaluated. Implementation costs were calculated to be €17,732, representing the total costs spent to implement the A-Team. For this specific patient group, admitted to a urology ward and consulted on day 2 by the A-Team, the model estimated total savings of €60,306 after one year for this single department, leading to a return on investment of 5.9. The implemented multidisciplinary A-Team performing a day 2 case-audit in the hospital had a positive return on investment, caused by a reduced length of stay due to more appropriate antibiotic therapy. Based on the extensive data analysis, a model of this intervention could be constructed. This model could be used by other institutions, with their own data, to estimate the effects of a day 2 case-audit in their hospital.

  20. Obesity interacts with infectious mononucleosis in risk of multiple sclerosis

    PubMed Central

    Hedström, A K; Lima Bomfim, I; Hillert, J; Olsson, T; Alfredsson, L

    2015-01-01

    Background and purpose: The possible interaction between adolescent obesity and past infectious mononucleosis (IM) was investigated with regard to multiple sclerosis (MS) risk. Methods: This report is based on two population-based case–control studies, one with incident cases (1780 cases, 3885 controls) and one with prevalent cases (4502 cases, 4039 controls). Subjects were categorized based on adolescent body mass index (BMI) and past IM and compared with regard to occurrence of MS by calculating odds ratios with 95% confidence intervals (CIs) employing logistic regression. A potential interaction between adolescent BMI and past IM was evaluated by calculating the attributable proportion due to interaction. Results: Regardless of human leukocyte antigen (HLA) status, a substantial interaction was observed between adolescent obesity and past IM with regard to MS risk. The interaction was most evident when IM after the age of 10 was considered (attributable proportion due to interaction 0.8, 95% CI 0.6–1.0 in the incident study, and 0.7, 95% CI 0.5–1.0 in the prevalent study). In the incident study, the odds ratio of MS was 14.7 (95% CI 5.9–36.6) amongst subjects with adolescent obesity and past IM after the age of 10, compared with subjects with neither exposure. The corresponding odds ratio in the prevalent study was 13.2 (95% CI 5.2–33.6). Conclusions: An obese state both impacts the cellular immune response to infections and induces a state of chronic immune-mediated inflammation, which may help explain our finding of an interaction between adolescent BMI and past IM. Measures taken against adolescent obesity may thus be a preventive strategy against MS. PMID:25530445
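
    For readers unfamiliar with the interaction measure used here, the attributable proportion due to interaction compares the joint odds ratio with the two single-exposure odds ratios (Rothman's formula). A minimal sketch with made-up odds ratios, not the study's estimates:

```python
def attributable_proportion(or11, or10, or01):
    """Attributable proportion due to interaction (Rothman):
    AP = (OR11 - OR10 - OR01 + 1) / OR11,
    where OR11 is the odds ratio for the jointly exposed group and
    OR10/OR01 are the single-exposure odds ratios, all relative to the
    doubly unexposed reference group."""
    return (or11 - or10 - or01 + 1.0) / or11

# Hypothetical odds ratios for illustration only:
print(attributable_proportion(or11=12.0, or10=2.0, or01=2.5))  # -> ~0.71
```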

  1. An Efficient Implementation of the Nwat-MMGBSA Method to Rescore Docking Results in Medium-Throughput Virtual Screenings

    NASA Astrophysics Data System (ADS)

    Maffucci, Irene; Hu, Xiao; Fumagalli, Valentina; Contini, Alessandro

    2018-03-01

    Nwat-MMGBSA is a variant of MM-PB/GBSA based on the inclusion of a number of explicit water molecules that are the closest to the ligand in each frame of a molecular dynamics trajectory. This method demonstrated improved correlations between calculated and experimental binding energies in both protein-protein interactions and ligand-receptor complexes, in comparison to standard MM-GBSA. A protocol optimization, aimed at maximizing efficacy and efficiency, is discussed here considering penicillopepsin, HIV1-protease, and BCL-XL as test cases. Calculations were performed in triplicate on both classic HPC environments and on standard workstations equipped with a GPU card, evidencing no statistical differences in the results. No relevant differences in correlation to experiment were observed when performing Nwat-MMGBSA calculations on 4 ns or 1 ns long trajectories. A fully automatic workflow for structure-based virtual screening, covering library set-up, docking, and Nwat-MMGBSA rescoring, has then been developed. The protocol has been tested against no rescoring or standard MM-GBSA rescoring within a retrospective virtual screening of inhibitors of AmpC β-lactamase and of the Rac1-Tiam1 protein-protein interaction. In both cases, Nwat-MMGBSA rescoring provided a statistically significant increase of between 20% and 30% in the ROC AUC, compared to docking scoring or to standard MM-GBSA rescoring.

  2. Using digital inpainting to estimate incident light intensity for the calculation of red blood cell oxygen saturation from microscopy images.

    PubMed

    Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G

    2018-05-25

    Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of the spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of the incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if the hematocrit is too large, since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum-intensity-based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum-intensity-based method fails (e.g. stationary cells), or when higher accuracy is required. This article is protected by copyright. All rights reserved.
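
    The abstract does not specify the inpainting algorithm; as a hedged illustration of the general idea, the sketch below uses OpenCV's Telea inpainting to reconstruct the incident (background) intensity under a masked cell, then forms a per-pixel optical density. The file names and mask are hypothetical inputs, and the hemoglobin-spectroscopy step that converts optical density to saturation is omitted.

```python
import numpy as np
import cv2  # OpenCV

# Estimate incident light intensity under a red blood cell by inpainting
# the cell region from the surrounding background, then compute a per-pixel
# optical density OD = -log10(I / I0).

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
cell_mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file

# Fill the masked (cell) pixels from surrounding intensities (Telea method)
incident = cv2.inpaint(frame, cell_mask, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)

I = frame.astype(np.float64) + 1.0      # transmitted intensity; avoid log(0)
I0 = incident.astype(np.float64) + 1.0  # estimated incident intensity
od = np.where(cell_mask > 0, -np.log10(I / I0), 0.0)
print("mean OD over cell pixels:", od[cell_mask > 0].mean())
```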

  3. Case-based reimbursement for psychiatric hospital care.

    PubMed

    Sederer, L I; Eisen, S V; Dill, D; Grob, M C; Gougeon, M L; Mirin, S M

    1992-11-01

    A fixed-prepayment system (case-based reimbursement) for patients initially requiring hospital-level care was evaluated for one year through an arrangement between a private nonprofit psychiatric hospital and a self-insured company desiring to provide psychiatric services to its employees. This clinical and financial experiment offered a means of containing costs while monitoring quality of care. A two-group, case-control study was undertaken of treatment outcomes at discharge, patient satisfaction with hospital care, and service use and costs during the program's first year. Compared with costs for patients in the control group, costs for those in the program were lower per patient and per admission; cumulative costs for patients requiring rehospitalization were also lower. However, costs for outpatient services for patients in the program were not calculated. Treatment outcomes and patients' satisfaction with hospital care were comparable for the two groups.

  4. Two-particle correlation function and dihadron correlation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vechernin, V. V., E-mail: v.vechernin@spbu.ru; Ivanov, K. O.; Neverov, D. I.

    It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C_2. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton–nucleus and deuteron–nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C_2 for all cases, including the case where the rapidity distribution of product particles is not uniform.
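
    As a rough illustration of the two-window method mentioned above (not the authors' implementation), per-event multiplicities in two separated rapidity windows can be turned directly into a normalized pair-correlation estimate. The event sample below is synthetic, with a shared Poisson "source" number inducing the correlation.

```python
import numpy as np

# Two-window multiplicity-correlation sketch: estimate
# C2 = <n_F n_B> / (<n_F><n_B>) - 1 (one common normalization) from
# per-event multiplicities in non-overlapping forward/backward windows.

rng = np.random.default_rng(1)
n_events = 200_000
sources = rng.poisson(10.0, n_events)       # common number of "strings"
n_f = rng.poisson(0.5 * sources)            # forward-window multiplicity
n_b = rng.poisson(0.5 * sources)            # backward-window multiplicity

c2 = (n_f * n_b).mean() / (n_f.mean() * n_b.mean()) - 1.0
print(f"C2 estimate: {c2:.4f}")
```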

  5. Application of the Deformation Information System for automated analysis and mapping of mining terrain deformations - case study from SW Poland

    NASA Astrophysics Data System (ADS)

    Blachowski, Jan; Grzempowski, Piotr; Milczarek, Wojciech; Nowacka, Anna

    2015-04-01

    Monitoring, mapping and modelling of mining-induced terrain deformations are important tasks for quantifying and minimising the threats that arise from underground extraction of useful minerals and affect surface infrastructure, human safety, the environment and the security of the mining operation itself. The range of methods and techniques used for monitoring and analysis of mining terrain deformations is wide and expanding with the progress in geographical information technologies. These include, for example: terrestrial geodetic measurements, Global Navigation Satellite Systems, remote sensing, GIS-based modelling and spatial statistics, finite element method modelling, geological modelling, empirical modelling using e.g. the Knothe theory, artificial neural networks, fuzzy logic calculations and others. The presentation shows the results of numerical modelling and mapping of mining terrain deformations for two underground mining sites in SW Poland, a hard coal mine (abandoned) and a copper ore mine (active), using the functionalities of the Deformation Information System (DIS) (Blachowski et al, 2014 @ http://meetingorganizer.copernicus.org/EGU2014/EGU2014-7949.pdf). The functionalities of the spatial data modelling module of DIS are presented, and its applications in modelling, mapping and visualising mining terrain deformations based on processing of measurement data (geodetic and GNSS) for these two cases are characterised and compared. These include self-developed automation procedures, implemented in DIS, for calculating mining terrain subsidence with different interpolation techniques, calculation of other mining deformation parameters (i.e. tilt, horizontal displacement, horizontal strain and curvature), as well as mapping of mining terrain categories based on classification of the values of these parameters as used in Poland. Acknowledgments. This work has been financed from the National Science Centre Project "Development of a numerical method of mining ground deformation modelling in complex geological and mining conditions" UMO-2012/07/B/ST10/04297 executed at the Faculty of Geoengineering, Mining and Geology of the Wroclaw University of Technology (Poland).

  6. Thermodynamics of surface defects at the aspirin/water interface

    NASA Astrophysics Data System (ADS)

    Schneider, Julian; Zheng, Chen; Reuter, Karsten

    2014-09-01

    We present a simulation scheme to calculate defect formation free energies at a molecular crystal/water interface based on force-field molecular dynamics simulations. To this end, we adopt and modify existing approaches for calculating binding free energies of biological ligand/receptor complexes to make them applicable to common surface defects, such as step edges and kink sites. We obtain statistically accurate and reliable free energy values for the aspirin/water interface, which can be applied to estimate the distribution of defects using well-established thermodynamic relations. As a showcase we calculate the free energy of dissolving molecules from kink sites at the interface. This free energy can be related to the solubility concentration, and we obtain solubility values in excellent agreement with experimental results.

  7. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    PubMed

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling of biomedical models of human bone from computerized tomography (CT) images: one is based on a triangular mesh and the other is based on a parametric surface model; the latter is more popular in practice. The outlines and modeling procedures of the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation is conducted. Numerical calculation results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated in relation to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is presented. The parametric surface based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular mesh based method is more robust and efficient.

  8. Design of Stripping Columns Applied to Drinking Water to Minimize Carcinogenic Risk from Trihalomethanes (THMs)

    PubMed Central

    Canosa, Joel

    2018-01-01

    The aim of this study is the application of a software tool to the design of stripping columns to calculate the removal of trihalomethanes (THMs) from drinking water. The tool also allows calculating the rough capital cost of the column and the decrease in the carcinogenic risk indices associated with the elimination of THMs and, thus, the investment needed to save a human life. The design of stripping columns includes the determination, among other factors, of the height of a transfer unit (HOG), the number of transfer units (NOG), and the section (S) of the column based on the study of pressure drop. These results have been compared with THM stripping literature values, showing that the simulation is sufficiently conservative. Three case studies were chosen to apply the developed software. The first case study was representative of a small-scale application to a community in Córdoba (Spain) where chloroform is predominant and has a low concentration. The second case study was of an intermediate scale in a region in Venezuela, and the third case study was representative of large-scale treatment of water in the Barcelona metropolitan region (Spain). Results showed that case studies with larger scale and higher initial risk offer the best capital investment to decrease the risk. PMID:29562670
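
    The NOG/HOG sizing step mentioned above follows, in most textbooks, the standard countercurrent packed-tower relation; below is a minimal sketch under that assumption, with illustrative inputs (90% chloroform removal) rather than the study's cases.

```python
import math

# Transfer-unit sizing for a countercurrent air-stripping column:
# stripping factor R = H * (Qa/Qw), number of transfer units
# NOG = R/(R-1) * ln( ((c_in/c_out)*(R-1) + 1) / R ), height Z = HOG * NOG.

def packing_height(c_in, c_out, henry_dimensionless, air_to_water, hog_m):
    """Return (NOG, packing height in m) for the standard design equation."""
    R = henry_dimensionless * air_to_water
    nog = (R / (R - 1.0)) * math.log(((c_in / c_out) * (R - 1.0) + 1.0) / R)
    return nog, nog * hog_m

# Hypothetical inputs: 90% removal, dimensionless Henry constant ~0.15
# for chloroform near 20 C, air-to-water ratio 30, HOG = 0.6 m.
nog, z = packing_height(c_in=100.0, c_out=10.0, henry_dimensionless=0.15,
                        air_to_water=30.0, hog_m=0.6)
print(f"NOG = {nog:.2f}, packing height = {z:.2f} m")
```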

  9. 42 CFR 484.220 - Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-day episode payment rate for case-mix and area wage levels. 484.220 Section 484.220 Public Health... Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and area wage levels... case-mix using a case-mix index to explain the relative resource utilization of different patients. To...

  10. 42 CFR 484.220 - Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-day episode payment rate for case-mix and area wage levels. 484.220 Section 484.220 Public Health... Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and area wage levels... case-mix using a case-mix index to explain the relative resource utilization of different patients. To...

  11. 42 CFR 484.220 - Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-day episode payment rate for case-mix and area wage levels. 484.220 Section 484.220 Public Health... Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and area wage levels... case-mix using a case-mix index to explain the relative resource utilization of different patients. To...

  12. 42 CFR 484.220 - Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-day episode payment rate for case-mix and area wage levels. 484.220 Section 484.220 Public Health... Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and area wage levels... case-mix using a case-mix index to explain the relative resource utilization of different patients. To...

  13. Microwave signatures of ice hydrometeors from ground-based observations above Summit, Greenland

    DOE PAGES

    Pettersen, Claire; Bennartz, Ralf; Kulie, Mark S.; ...

    2016-04-15

    Multi-instrument, ground-based measurements provide unique and comprehensive data sets of the atmosphere for a specific location over long periods of time, and the resulting data complement past and existing global satellite observations. Our paper explores the effect of ice hydrometeors on ground-based, high-frequency passive microwave measurements and attempts to isolate an ice signature for the summer seasons at Summit, Greenland, from 2010 to 2013. Data from a combination of passive microwave, cloud radar, radiosonde, and ceilometer instruments were examined to isolate the ice signature at microwave wavelengths. By limiting the study to a cloud liquid water path of 40 g m-2 or less, the cloud radar can identify cases where the precipitation was dominated by ice. These cases were examined using liquid water and gas microwave absorption models, and brightness temperatures were calculated for the high-frequency microwave channels: 90, 150, and 225 GHz. By comparing the measured brightness temperatures from the microwave radiometers and the calculated brightness temperatures using only gas and liquid contributions, any residual brightness temperature difference is due to emission and scattering of microwave radiation by the ice hydrometeors in the column. The ice signature in the 90, 150, and 225 GHz channels for the Summit Station summer months was isolated. This measured ice signature was then compared to an equivalent brightness temperature difference calculated with a radiative transfer model including microwave single-scattering properties for several ice habits. Initial model results compare well against the 4 years of summer-season isolated ice signatures in the high-frequency microwave channels.

  14. Accurate calculation of conformational free energy differences in explicit water: the confinement-solvation free energy approach.

    PubMed

    Esque, Jeremy; Cecchini, Marco

    2015-04-23

    The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.

  15. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    A simple technique was developed using conventional finite element analysis to determine the stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. The technique involves the calculation of crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating K1 and K2 for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading. The correlation between the exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
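
    The regression step can be illustrated with a toy version that ignores the bimaterial oscillatory singularity: stresses sampled at several radii ahead of the tip are fitted to the K/sqrt(2*pi*r) field by linear least squares. The data below are synthetic stand-ins for finite element output.

```python
import numpy as np

# Fit K1 and K2 from near-tip stresses: sigma = K / sqrt(2*pi*r), so a
# linear least-squares fit against x = 1/sqrt(2*pi*r) recovers K.

rng = np.random.default_rng(2)
r = np.linspace(0.5e-3, 5e-3, 10)                 # radii ahead of tip (m)
K1_true, K2_true = 2.0e6, 0.8e6                   # Pa*sqrt(m), demo values
noise = lambda: rng.normal(0.0, 1e4, r.size)      # mimic discretization error
sigma_yy = K1_true / np.sqrt(2.0 * np.pi * r) + noise()   # opening stress
sigma_xy = K2_true / np.sqrt(2.0 * np.pi * r) + noise()   # shear stress

x = 1.0 / np.sqrt(2.0 * np.pi * r)
K1 = np.linalg.lstsq(x[:, None], sigma_yy, rcond=None)[0][0]
K2 = np.linalg.lstsq(x[:, None], sigma_xy, rcond=None)[0][0]
print(f"K1 = {K1:.3e}, K2 = {K2:.3e}, K2/K1 = {K2 / K1:.3f}")
```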

  16. Brain stem/brain stem occipital bone ratio and the four-line view in nuchal translucency images of fetuses with open spina bifida.

    PubMed

    Iuculano, Ambra; Zoppi, Maria Angelica; Piras, Alessandra; Arras, Maurizio; Monni, Giovanni

    2014-09-10

    Objective: To assess the brain stem depth/brain stem occipital bone distance (BS/BSOB ratio) and the four-line view in images obtained for nuchal translucency (NT) screening in fetuses with open spina bifida (OSB). Methods: Single center, retrospective study based on the assessment of NT screening images of fetuses with OSB. The ratio between the BS depth and the BSOB distance was calculated (BS/BSOB ratio), the four-line view was assessed, and the sensitivities of a BS/BSOB ratio greater than or equal to 1 and of failure to detect the four-line view were calculated. Results: There were 17 cases of prenatally diagnosed OSB. In six cases the suspicion of OSB was raised during NT screening, in six cases the diagnosis was made before 20 weeks, and in five cases during the anomaly scan. The BS/BSOB ratio was greater than or equal to 1 in all 17 cases, and only three lines were visualized in 15 of the 17 OSB images, yielding sensitivities of 100% (95% CI, 81 to 100%) and 88% (95% CI, 65 to 96%), respectively. Conclusion: Assessment of the BS/BSOB ratio and the four-line view in NT images is feasible and detects fetuses affected by OSB with high sensitivity. The presence of associated anomalies or of an enlarged NT enhances early detection.

  17. Clinicopathologic Correlation of White, Non scrapable Oral Mucosal Surface Lesions: A Study of 100 Cases

    PubMed Central

    Raghunath, Vandana; Karpe, Tanveer; Akifuddin, Syed; Imran, Shahid; Dhurjati, Venkata Naga Nalini; Aleem, Mohammed Ahtesham; Khatoon, Farheen

    2016-01-01

    Introduction White, non scrapable lesions are commonly seen in the oral cavity. Based on their history and clinical appearance, most of these lesions can be easily diagnosed, but sometimes the diagnosis may go wrong. In order to arrive at a confirmative diagnosis, histopathological assessment is needed in many cases, if not all. Aims 1) To find out the prevalence of clinically diagnosed oral white, non scrapable lesions. 2) To find out the prevalence of histopathologically diagnosed oral white, non scrapable lesions. 3) To correlate the clinical and histopathological diagnoses of the above lesions. Materials and Methods A total of 100 cases of oral white, non scrapable lesions were included in the study. Based on their history and clinical presentation, a clinical provisional diagnosis was made. Then biopsy was done, a confirmatory histopathological diagnosis was given, and both were correlated. In order to correlate clinical and histopathological diagnoses, a Discrepancy Index (DI) was calculated for all the cases. Results Based on clinical diagnosis, there were 59 cases (59%) of leukoplakia, 29 cases (29%) of lichen planus and six cases (6%) of lichenoid reaction; whereas, based on histopathological diagnosis, there were 66 cases (66%) of epithelial hyperplasia and hyperkeratosis (leukoplakia) and 30 cases (30%) of lichen planus. Seventy-eight clinically diagnosed cases (78%) correlated with the histopathological diagnosis and 22 cases (22%) did not correlate. The total discrepancy index was 22%. Conclusion A clinician needs to be aware of oral white, non scrapable lesions. Due to the overlap of many clinical features in some of these lesions, and also due to their malignant potential, a histopathological confirmative diagnosis is recommended. PMID:27042583

  18. Healthcare cost savings estimator tool for chronic disease self-management program: a new tool for program administrators and decision makers.

    PubMed

    Ahn, SangNam; Smith, Matthew Lee; Altpeter, Mary; Post, Lindsey; Ory, Marcia G

    2015-01-01

    Chronic disease self-management education (CDSME) programs have been delivered to more than 100,000 older Americans with chronic conditions. As one of the Stanford suite of evidence-based CDSME programs, the chronic disease self-management program (CDSMP) has been disseminated in diverse populations and settings. The objective of this paper is to introduce a practical, universally applicable tool to assist program administrators and decision makers plan implementation efforts and make the case for continued program delivery. This tool was developed utilizing data from a recent National Study of CDSMP to estimate national savings associated with program participation. Potential annual healthcare savings per CDSMP participant were calculated based on averted emergency room visits and hospitalizations. While national data can be utilized to estimate cost savings, the tool has built-in features allowing users to tailor calculations based on their site-specific data. Building upon the National Study of CDSMP's documented potential savings of $3.3 billion in healthcare costs by reaching 5% of adults with one or more chronic conditions, two heuristic case examples were also explored based on different population projections. The case examples show how a small county and large metropolitan city were not only able to estimate healthcare savings ($38,803 for the small county; $732,290 for the large metropolitan city) for their existing participant populations but also to project significant healthcare savings if they plan to reach higher proportions of middle-aged and older adults. Having a tool to demonstrate the monetary value of CDSMP can contribute to the ongoing dissemination and sustainability of such community-based interventions. Next steps will be creating a user-friendly, internet-based version of Healthcare Cost Savings Estimator Tool: CDSMP, followed by broadening the tool to consider cost savings for other evidence-based programs.
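
    The arithmetic at the core of such an estimator is simple; the sketch below shows one plausible formulation (averted utilization times unit costs, net of delivery costs). All rates and costs are hypothetical placeholders, not figures from the National Study.

```python
# Net healthcare savings for a self-management program:
# participants * (averted ER visits * ER cost
#                 + averted hospitalizations * hospitalization cost)
# minus per-participant program delivery costs.

def cdsmp_net_savings(participants, er_visits_averted_pp, cost_per_er_visit,
                      hosp_averted_pp, cost_per_hosp, program_cost_pp):
    gross = participants * (er_visits_averted_pp * cost_per_er_visit
                            + hosp_averted_pp * cost_per_hosp)
    return gross - participants * program_cost_pp

# Hypothetical inputs for illustration only:
print(cdsmp_net_savings(participants=500, er_visits_averted_pp=0.1,
                        cost_per_er_visit=1500.0, hosp_averted_pp=0.05,
                        cost_per_hosp=10000.0, program_cost_pp=400.0))
```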

  19. Calculations of Nuclear Astrophysics and Californium Fission Neutron Spectrum Averaged Cross Section Uncertainties Using ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-fidelity Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B., E-mail: pritychenko@bnl.gov

    Nuclear astrophysics and californium fission neutron spectrum averaged cross sections and their uncertainties for ENDF materials have been calculated. Absolute values were deduced with Maxwellian and Mannhart spectra, while uncertainties are based on ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0 and Low-Fidelity covariances. These quantities are compared with available data, independent benchmarks, EXFOR library, and analyzed for a wide range of cases. Recommendations for neutron cross section covariances are given and implications are discussed.

  20. Theoretical modelling of AFM for bimetallic tip-substrate interactions

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John

    1991-01-01

    Recently, a new technique for calculating the defect energetics of alloys based on Equivalent Crystal Theory was developed. This technique successfully predicts the bulk properties of binary alloys as well as segregation energies in the dilute limit. The authors apply this technique to the calculation of energy and force as functions of the separation between an atomic force microscope (AFM) tip and a substrate. The study was done for different combinations of tip and sample materials. The validity of the universality discovered for same-metal interfaces is examined for the case of different-metal interactions.

  1. Resistive wall wakefields of short bunches at cryogenic temperatures

    DOE PAGES

    Stupakov, G.; Bane, K. L. F.; Emma, P.; ...

    2015-03-19

    In this study, we present calculations of the longitudinal wakefields at cryogenic temperatures for extremely short bunches, characteristic of modern x-ray free electron lasers. The calculations are based on the equations for the surface impedance in the regime of the anomalous skin effect in metals. This paper extends and complements an earlier analysis of B. Podobedov, Phys. Rev. ST Accel. Beams 12, 044401 (2009), into the region of very high frequencies associated with bunch lengths in the micron range. We study in detail the case of a rectangular bunch distribution for parameters of interest for LCLS-II with a superconducting undulator.

  2. Unified Description of Inelastic Propensity Rules for Electron Transport through Nanoscale Junctions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Ueba, Hiromu; Lorente, Nicolás; Brandbyge, Mads

    2008-06-01

    We present a method to analyze the results of first-principles based calculations of electronic currents including inelastic electron-phonon effects. This method allows us to determine the electronic and vibrational symmetries in play, and hence to obtain the so-called propensity rules for the studied systems. We show that only a few scattering states—namely those belonging to the most transmitting eigenchannels—need to be considered for a complete description of the electron transport. We apply the method on first-principles calculations of four different systems and obtain the propensity rules in each case.

  3. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    PubMed

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  4. Fully automated lobe-based airway taper index calculation in a low dose MDCT CF study over 4 time-points

    NASA Astrophysics Data System (ADS)

    Weinheimer, Oliver; Wielpütz, Mark O.; Konietzke, Philip; Heussel, Claus P.; Kauczor, Hans-Ulrich; Brochhausen, Christoph; Hollemann, David; Savage, Dasha; Galbán, Craig J.; Robinson, Terry E.

    2017-02-01

    Cystic fibrosis (CF) results in severe bronchiectasis in nearly all cases. Bronchiectasis is a disease in which parts of the airways are permanently dilated. The development and progression of bronchiectasis are not evenly distributed over the entire lungs; rather, individual functional units are affected differently. We developed a fully automated method for the precise calculation of lobe-based airway taper indices. To calculate taper indices, some preparatory algorithms are needed. The airway tree is segmented, skeletonized and transformed to a rooted acyclic graph. This graph is used to label the airways. Then a modified version of the previously validated integral-based method (IBM) for airway geometry determination is utilized. The rooted graph and the airway lumen and wall information are then used to calculate the airway taper indices. Using a computer-generated phantom simulating 10 cross sections of airways, we present results showing a high accuracy of the modified IBM. The new taper index calculation method was applied to 144 volumetric inspiratory low-dose MDCT scans. The scans were acquired from 36 children with mild CF at 4 time-points (baseline, 3 months, 1 year, 2 years). We found a moderate correlation with the visual lobar Brody bronchiectasis scores of three raters (r2 = 0.36, p < .0001). The taper index has the potential to be a precise imaging biomarker, but further improvements are needed. In combination with other imaging biomarkers, taper index calculation can be an important tool for monitoring the progression and the individual treatment of patients with bronchiectasis.
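
    The abstract does not give the exact taper-index definition; one plausible formulation (an assumption, not the authors') is the fitted fractional loss of lumen diameter per millimeter along the airway centerline, as sketched below with hypothetical measurements standing in for IBM-derived geometry.

```python
import numpy as np

# Taper-index sketch: fit lumen diameter versus distance along the airway
# centerline; the negative slope normalized by the proximal diameter gives
# a taper index in percent diameter loss per millimeter.

dist_mm = np.linspace(0.0, 20.0, 11)              # along the centerline
diam_mm = (6.0 - 0.12 * dist_mm
           + np.random.default_rng(4).normal(0.0, 0.05, 11))

slope, intercept = np.polyfit(dist_mm, diam_mm, 1)
taper_index = -100.0 * slope / intercept          # % diameter loss per mm
print(f"taper index: {taper_index:.2f} %/mm")
```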

  5. Polarimetric signatures of a canopy of dielectric cylinders based on first and second order vector radiative transfer theory

    NASA Technical Reports Server (NTRS)

    Tsang, Leung; Chan, Chi Hou; Kong, Jin Au; Joseph, James

    1992-01-01

    Complete polarimetric signatures of a canopy of dielectric cylinders overlying a homogeneous half space are studied with the first and second order solutions of the vector radiative transfer theory. The vector radiative transfer equations contain a general nondiagonal extinction matrix and a phase matrix. The energy conservation issue is addressed by calculating the elements of the extinction matrix and the elements of the phase matrix in a manner that is consistent with energy conservation. Two methods are used. In the first method, the surface fields and the internal fields of the dielectric cylinder are calculated by using the fields of an infinite cylinder. The phase matrix is calculated and the extinction matrix is calculated by summing the absorption and scattering to ensure energy conservation. In the second method, the method of moments is used to calculate the elements of the extinction and phase matrices. The Mueller matrix based on the first order and second order multiple scattering solutions of the vector radiative transfer equation are calculated. Results from the two methods are compared. The vector radiative transfer equations, combined with the solution based on method of moments, obey both energy conservation and reciprocity. The polarimetric signatures, copolarized and depolarized return, degree of polarization, and phase differences are studied as a function of the orientation, sizes, and dielectric properties of the cylinders. It is shown that second order scattering is generally important for vegetation canopy at C band and can be important at L band for some cases.

  6. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456
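
    A hedged sketch of how a locally estimated dielectric constant could enter a Still-type Generalized Born evaluation: the paper's modified GB model is not reproduced here; this is the textbook pairwise formula, with `eps` standing in for the polarizability-derived local estimate.

```python
import numpy as np

# Still's pairwise Generalized Born polarization energy:
# dG = -0.5 * KE * (1 - 1/eps) * sum_ij q_i q_j / f_GB(r_ij),
# f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj))); the i == j terms reduce
# to the Born self-energies.

KE = 332.0636  # Coulomb constant, kcal*angstrom/(mol*e^2)

def gb_polarization_energy(coords, charges, born_radii, eps):
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    rr = born_radii[:, None] * born_radii[None, :]
    f_gb = np.sqrt(d2 + rr * np.exp(-d2 / (4.0 * rr)))
    qq = charges[:, None] * charges[None, :]
    return -0.5 * KE * (1.0 - 1.0 / eps) * np.sum(qq / f_gb)

# Toy two-charge system with a hypothetical local dielectric of 20
coords = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
print(gb_polarization_energy(coords, np.array([0.5, -0.5]),
                             np.array([1.5, 1.7]), eps=20.0))
```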

  7. A global reaction route mapping-based kinetic Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Mitchell, Izaac; Irle, Stephan; Page, Alister J.

    2016-07-01

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
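
    The KMC accept/reject and time-propagation step described here is standard; below is a minimal sketch, with made-up rate constants in place of the harmonic transition state theory rates that the GRRM search would supply.

```python
import math
import random

# One kinetic Monte Carlo step: pick a pathway with probability proportional
# to its rate, then advance the clock by dt = -ln(u) / k_tot (first-order
# kinetics).

def kmc_step(rates, rng=random.random):
    k_tot = sum(rates)
    target = rng() * k_tot
    acc = 0.0
    for i, k in enumerate(rates):        # cumulative-sum selection
        acc += k
        if target < acc:
            break
    dt = -math.log(rng()) / k_tot        # stochastic time increment
    return i, dt

random.seed(0)
pathway, dt = kmc_step([1.0e12, 2.5e11, 3.0e10])  # hypothetical rates, 1/s
print(f"chosen pathway {pathway}, time advanced {dt:.3e} s")
```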

  8. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    PubMed

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-02

    The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). G3, SCS-MP2 and M11-L methods coupled with the SMD and SM8 solvation models perform well for alkanolamines, with mean unsigned errors below 0.20 pKa units in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computational pKa values of these 35 amines with the computationally low-cost SM8/M11-L density functional approach. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
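
    The final arithmetic step common to these pKa protocols is a one-liner: once a deprotonation free energy in solution has been assembled from gas-phase energies and CPCM/SMD/SM8 solvation terms, pKa = dG_aq / (RT ln 10). The free energy below is a hypothetical input, not a value from the paper.

```python
import math

R_KCAL = 1.987204e-3   # gas constant, kcal/(mol*K)
T = 298.15             # temperature, K

def pka_from_dg(dg_aq_kcal):
    """pKa from an aqueous deprotonation free energy (kcal/mol)."""
    return dg_aq_kcal / (R_KCAL * T * math.log(10.0))

print(f"pKa = {pka_from_dg(13.0):.2f}")   # ~9.5 for dG = 13 kcal/mol
```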

  9. A global reaction route mapping-based kinetic Monte Carlo algorithm.

    PubMed

    Mitchell, Izaac; Irle, Stephan; Page, Alister J

    2016-07-14

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.

  10. Model of coordination melting of crystals and anisotropy of physical and chemical properties of the surface

    NASA Astrophysics Data System (ADS)

    Bokarev, Valery P.; Krasnikov, Gennady Ya

    2018-02-01

    Based on an evaluation of crystal properties such as the surface energy and its anisotropy, the surface melting temperature, the anisotropy of the electron work function, and the anisotropy of adsorption, the advantages of the model of coordination melting (MCM) in calculating the surface properties of crystals were demonstrated. The model of coordination melting makes it possible to calculate with acceptable accuracy the specific surface energy of crystals, the anisotropy of the surface energy, the habit of natural crystals, the surface melting temperature of a crystal, the anisotropy of the electron work function and the anisotropy of the adhesive properties of single-crystal surfaces. The advantage of the model is the simplicity of evaluating the surface properties of a crystal based on data given in the reference literature. In this case, there is no need for the complex mathematical apparatus used in calculations based on quantum chemistry or molecular dynamics modeling.

  11. Anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions: The influence of the level-density parameter and the neck thickness

    NASA Astrophysics Data System (ADS)

    Naderi, D.; Pahlavani, M. R.; Alavi, S. A.

    2013-05-01

    Using the Langevin dynamical approach, the neutron multiplicity and the anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions were calculated. We applied one- and two-dimensional Langevin equations to study the decay of a hot excited compound nucleus. The influence of the level-density parameter on the neutron multiplicity and the anisotropy of the angular distribution of fission fragments was investigated. We used level-density parameters based on the liquid drop model with two different prescriptions, the Bartel approach and the Pomorska approach. Our calculations show that the anisotropy and neutron multiplicity are affected by the level-density parameter and the neck thickness. The calculations were performed for the 16O+208Pb and 20Ne+209Bi reactions. Results obtained with the two-dimensional Langevin equations and the level-density parameter based on the approach of Bartel and co-workers are in better agreement with experimental data.

  12. Exploring the relation between online case-based discussions and learning outcomes in dental education.

    PubMed

    Koole, Sebastiaan; Vervaeke, Stijn; Cosyn, Jan; De Bruyn, Hugo

    2014-11-01

    Online case-based discussions, parallel to theoretical dental education, have been highly valued by students and supervisors. This study investigated the relation between variables of online group discussions and learning outcomes. At Ghent University in Belgium, undergraduate dental students (years two and three) are required to participate in online case-based discussion groups (five students/group) in conjunction with two theoretical courses on basic periodontics and related therapy. Each week, a patient case is discussed under supervision of a periodontist, who authored the case and performed the treatment. Each case includes treatment history and demand, intra- and extraoral images, and full diagnostic information with periodontal and radiographic status. For this retrospective study, data were obtained for all 252 students in forty-three discussion groups between 2009 and 2012. Spearman's rank correlations were calculated to investigate the relation among group dynamics (number of group posts and views), individual student contributions (number of individual posts, newly introduced elements, questions, and reactions to other posts), supervisors' interventions (number of posts and posed questions), and learning outcomes (examination result). The results showed that learning outcomes were significantly related to the number of student posts (Spearman's rho (ρ)=0.19), newly introduced elements (ρ=0.21), reactions to other posts (ρ=0.14), number of supervisors' interventions (ρ=0.12), and supervisors' questions (ρ=0.20). These results suggest that individual student contributions during online case-based discussions and the provided supervision were related to learning outcomes.

  13. SU-F-J-186: Enabling Adaptive IMPT with CBCT-Based Dose Recalculation for H&N and Prostate Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurz, C; LMU Munich, Munich; Park, Y

    2016-06-15

    Purpose: To enable adaptive intensity modulated proton therapy for sites sensitive to inter-fractional changes on the basis of accurate CBCT-based proton dose calculations. To this aim, two CBCT intensity correction methods are considered: planning CT (pCT) to CBCT DIR and projection correction based on a pCT DIR prior. Methods: 3 H&N and 3 prostate cancer patients with CBCT images and corresponding projections were used in this study, in addition to pCT and re-planning CT (rpCT) images (H&N only). A virtual CT (vCT) was generated by pCT to CBCT DIR. In a second approach, the vCT was used as prior for scatter correction of the CBCT projections to yield a CBCTcor image. BEV 2D range maps of SFUD IMPT plans were compared. For the prostate cases, the geometric accuracy of the vCT was also evaluated by contour comparison to physician delineation of the CBCTcor and original CBCT. Results: SFUD dose calculations on vCT and CBCTcor were found to be within 3mm for 97% to 99% of 2D range maps. Median range differences compared to rpCT were below 0.5mm. Analysis showed that the DIR-based vCT approach exhibits inaccuracies in the pelvic region due to the very low soft-tissue contrast in the CBCT. The CBCTcor approach yielded results closer to the original CBCT in terms of DICE coefficients than the vCT (median 0.91 vs 0.81) for targets and OARs. In general, the CBCTcor approach was less affected by inaccuracies of the DIR used during the generation of the vCT prior. Conclusion: Both techniques yield 3D CBCT images with intensities equivalent to diagnostic CT and appear suitable for IMPT dose calculation for most sites. For H&N cases, no considerable differences between the two techniques were found, while improved results of the CBCTcor were observed for pelvic cases due to the reduced sensitivity to registration inaccuracies. Deutsche Forschungsgemeinschaft (MAP); Bundesministerium fur Bildung und Forschung (01IB13001)

  14. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because of the protons' finite range, which depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted at our institution for robustness analysis of IMPT plans. This work demonstrates that the WCS method sufficiently accounts for the uncertainties that may be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose of an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y and z directions were created, and the corresponding dose distributions were calculated using the approximated method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the result of the worst-case scenario method. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of the perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive sampling approach using the fast approximated method could be used to evaluate the effects of different factors on the robustness of IMPT plans in the future.
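
    A sketch of the sampling scheme described above: draw random setup shifts (up to 3 mm) and range errors (up to ±3.5%), evaluate a dosimetric index per scenario, and compare against a worst-case value. The fast dose engine is a hypothetical stand-in here; `d95_fast` is a synthetic placeholder, not the authors' approximation, and the single "corner" worst case is a simplification of the WCS shift set.

```python
import numpy as np

# Randomly perturbed robustness scenarios versus a worst-case reference.

rng = np.random.default_rng(42)
n_scenarios = 1000

shifts_mm = rng.uniform(-3.0, 3.0, size=(n_scenarios, 3))    # x, y, z setup
range_err = rng.uniform(-0.035, 0.035, size=n_scenarios)     # stopping power

def d95_fast(shift_mm, range_error):
    """Placeholder for a fast approximate dose recalculation; returns a
    synthetic CTV D95 (% of prescription) for illustration only."""
    penalty = 0.4 * np.linalg.norm(shift_mm) + 30.0 * abs(range_error)
    return 98.0 - penalty

d95 = np.array([d95_fast(s, r) for s, r in zip(shifts_mm, range_err)])
d95_worst = d95_fast(np.array([3.0, 3.0, 3.0]), -0.035)   # WCS-style corner

frac = np.mean(d95 >= d95_worst)
print(f"{100 * frac:.1f}% of sampled scenarios have D95 above the WCS value")
```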

  15. Soft evolution of multi-jet final states

    DOE PAGES

    Gerwick, Erik; Schumann, Steffen; Höche, Stefan; ...

    2015-02-16

    We present a new framework for computing resummed and matched distributions in processes with many hard QCD jets. The intricate color structure of soft gluon emission at large angles renders resummed calculations highly non-trivial in this case. We automate all ingredients necessary for the color evolution of the soft function at next-to-leading-logarithmic accuracy, namely the selection of the color bases and the projections of color operators and Born amplitudes onto those bases. Explicit results for all QCD processes with up to 2 → 5 partons are given. We also devise a new tree-level matching scheme for resummed calculations which exploits a quasi-local subtraction based on the Catani-Seymour dipole formalism. We implement both resummation and matching in the Sherpa event generator. As a proof of concept, we compute the resummed and matched transverse-thrust distribution for hadronic collisions.

  16. Is introducing rapid culture into the diagnostic algorithm of smear-negative tuberculosis cost-effective?

    PubMed

    Yakhelef, N; Audibert, M; Varaine, F; Chakaya, J; Sitienei, J; Huerga, H; Bonnet, M

    2014-05-01

    In 2007, the World Health Organization recommended introducing rapid Mycobacterium tuberculosis culture into the diagnostic algorithm of smear-negative pulmonary tuberculosis (TB). To assess the cost-effectiveness of introducing a rapid non-commercial culture method (thin-layer agar), together with Löwenstein-Jensen culture to diagnose smear-negative TB at a district hospital in Kenya. Outcomes (number of true TB cases treated) were obtained from a prospective study evaluating the effectiveness of a clinical and radiological algorithm (conventional) against the alternative algorithm (conventional plus M. tuberculosis culture) in 380 smear-negative TB suspects. The costs of implementing each algorithm were calculated using a 'micro-costing' or 'ingredient-based' method. We then compared the cost and effectiveness of conventional vs. culture-based algorithms and estimated the incremental cost-effectiveness ratio. The costs of conventional and culture-based algorithms per smear-negative TB suspect were respectively €39.5 and €144. The costs per confirmed and treated TB case were respectively €452 and €913. The culture-based algorithm led to diagnosis and treatment of 27 more cases for an additional cost of €1477 per case. Despite the increase in patients started on treatment thanks to culture, the relatively high cost of a culture-based algorithm will make it difficult for resource-limited countries to afford.
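
    The incremental cost-effectiveness ratio quoted above can be reproduced, up to rounding of the published figures, from the per-suspect costs and the number of additional cases treated; a minimal sketch:

```python
# Figures taken from the abstract above; the small discrepancy with the
# quoted EUR 1477 comes from rounding in the published per-suspect costs.
cost_conventional = 39.5   # EUR per smear-negative TB suspect
cost_culture = 144.0       # EUR per suspect, culture-based algorithm
n_suspects = 380
extra_cases_treated = 27

incremental_cost = (cost_culture - cost_conventional) * n_suspects
icer = incremental_cost / extra_cases_treated
print(f"ICER: EUR {icer:.0f} per additional TB case diagnosed and treated")
```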

  17. Temporal correlation functions of concentration fluctuations: an anomalous case.

    PubMed

    Lubelski, Ariel; Klafter, Joseph

    2008-10-09

    We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, displaying therefore ergodicity breaking. We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.

  18. Hydrostatic Equilibria of Rotating Stars with Realistic Equation of State

    NASA Astrophysics Data System (ADS)

    Yasutake, Nobutoshi; Fujisawa, Kotaro; Okawa, Hirotada; Yamada, Shoichi

    Stars generally rotate, but it is a non-trivial issue to obtain hydrostatic equilibria for rapidly rotating stars theoretically, especially for baroclinic cases, in which the pressure depends not only on the density but also on the temperature and composition. Stellar structures with a realistic equation of state are clearly baroclinic, yet there are few studies of such equilibria. In this study, we propose two methods to obtain hydrostatic equilibria that account for rotation and baroclinicity, namely the weak-solution method and the strong-solution method. The former is based on the variational principle, which is also applied to the calculation of inhomogeneous phases, known as pasta structures, in the crusts of neutron stars. We found that this method might break the balance equation locally, and we therefore introduce the strong-solution method. Note that our method is formulated in the mass coordinate and is hence appropriate for stellar evolution calculations.

  19. Elastic-plastic finite-element analyses of thermally cycled double-edge wedge specimens

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hunt, L. E.

    1982-01-01

    Elastic-plastic stress-strain analyses were performed for double-edge wedge specimens subjected to thermal cycling in fluidized beds at 316 and 1088 C. Four cases involving different nickel-base alloys (IN 100, Mar M-200, NASA TAZ-8A, and Rene 80) were analyzed by using the MARC nonlinear, finite element computer program. Elastic solutions from MARC showed good agreement with previously reported solutions obtained by using the NASTRAN and ISO3DQ computer programs. Equivalent total strain ranges at the critical locations calculated by elastic analyses agreed within 3 percent with those calculated from elastic-plastic analyses. The elastic analyses always resulted in compressive mean stresses at the critical locations. However, elastic-plastic analyses showed tensile mean stresses for two of the four alloys and an increase in the compressive mean stress for the highest plastic strain case.

  20. Development of automatic visceral fat volume calculation software for CT volume data.

    PubMed

    Nemoto, Mitsutaka; Yeernuer, Tusufuhan; Masutani, Yoshitaka; Nomura, Yukihiro; Hanaoka, Shouhei; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni

    2014-01-01

    To develop automatic visceral fat volume calculation software for computed tomography (CT) volume data and to evaluate its feasibility. A total of 24 sets of whole-body CT volume data and anthropometric measurements were obtained, with three sets for each of four BMI categories (under 20, 20 to 25, 25 to 30, and over 30) in both sexes. True visceral fat volumes were defined on the basis of manual segmentation of the whole-body CT volume data by an experienced radiologist. Software to automatically calculate visceral fat volumes was developed using a region segmentation technique based on morphological analysis with CT value threshold. Automatically calculated visceral fat volumes were evaluated in terms of the correlation coefficient with the true volumes and the error relative to the true volume. Automatic visceral fat volume calculation results of all 24 data sets were obtained successfully and the average calculation time was 252.7 seconds/case. The correlation coefficients between the true visceral fat volume and the automatically calculated visceral fat volume were over 0.999. The newly developed software is feasible for calculating visceral fat volumes in a reasonable time and was proved to have high accuracy.
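
    A minimal sketch of the threshold step at the core of such software, assuming the commonly used adipose HU window of -190 to -30 (the paper's exact threshold and its morphological separation of visceral from subcutaneous fat are not reproduced here):

```python
import numpy as np

def fat_volume_ml(ct_hu, voxel_size_mm, lo=-190, hi=-30):
    """Volume of voxels falling in the adipose-tissue HU window."""
    mask = (ct_hu >= lo) & (ct_hu <= hi)
    voxel_ml = np.prod(voxel_size_mm) / 1000.0  # mm^3 per voxel -> mL
    return mask.sum() * voxel_ml

# Toy volume: 50 slices of 100x100 voxels, ~2% of voxels in the fat window
rng = np.random.default_rng(0)
vol = rng.choice([-500.0, -100.0], p=[0.98, 0.02], size=(50, 100, 100))
print(f"fat volume: {fat_volume_ml(vol, (5.0, 0.98, 0.98)):.1f} mL")
```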

  1. The influence of anharmonic and solvent effects on the theoretical vibrational spectra of the guanine-cytosine base pairs in Watson-Crick and Hoogsteen configurations.

    PubMed

    Bende, Attila; Muntean, Cristina M

    2014-03-01

    The theoretical IR and Raman spectra of the guanine-cytosine DNA base pairs in Watson-Crick and Hoogsteen configurations were computed using the DFT method with the M06-2X meta-hybrid GGA exchange-correlation functional, including anharmonic corrections and solvent effects. The results for harmonic frequencies and their anharmonic corrections were compared with our previously calculated values obtained with the B3PW91 hybrid GGA functional. Significant differences were obtained for the anharmonic corrections calculated with the two different DFT functionals, especially for the stretching modes, while the corresponding harmonic frequencies did not differ considerably. For the Hoogsteen case the H⁺ vibration between the G-C base pair can be characterized as an asymmetric Duffing oscillator, and therefore unrealistic anharmonic corrections were obtained for normal modes in which this proton vibration is involved. The spectral modifications due to the anharmonic corrections, solvent effects and the influence of the sugar-phosphate group for the Watson-Crick and Hoogsteen base pair configurations, respectively, were also discussed. For the Watson-Crick case the influence of the stacking interaction on the theoretical IR and Raman spectra was also analyzed. Including the anharmonic correction in our normal mode analysis is essential if one wants to obtain correct assignments of the theoretical frequency values as compared with the experimental spectra.

  2. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved for the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
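
    A sketch of the asymptotic unconditional McNemar sample-size formula (in the form attributed to Connor, 1987), driven by hypothetical discordant-cell proportions of the 2 × 2 table:

```python
from math import ceil, sqrt
from statistics import NormalDist

def mcnemar_sample_size(p12, p21, alpha=0.05, power=0.80):
    """Pairs needed for the asymptotic unconditional McNemar test;
    p12 and p21 are the discordant cell probabilities of the 2x2 table."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    pd, diff = p12 + p21, p12 - p21
    n = (z_a * sqrt(pd) + z_b * sqrt(pd - diff ** 2)) ** 2 / diff ** 2
    return ceil(n)  # round up to whole pairs

print(mcnemar_sample_size(0.25, 0.15))  # hypothetical proportions -> 312 pairs
```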

  3. Determination of Irreducible Water Saturation from nuclear magnetic resonance based on fractal theory — a case study of sandstone with complex pore structure

    NASA Astrophysics Data System (ADS)

    Peng, L.; Pan, H.; Ma, H.; Zhao, P.; Qin, R.; Deng, C.

    2017-12-01

    The irreducible water saturation (Swir) is a vital parameter for permeability prediction and original oil and gas estimation. However, the complex pore structure of the rocks makes this parameter difficult to calculate from both laboratory and conventional well logging methods. In this study, an effective statistical method to predict Swir is derived directly from nuclear magnetic resonance (NMR) data based on fractal theory. The spectrum of the transversal relaxation time (T2) is normally considered an indicator of pore size distribution, and the fractal dimensions of micro- and meso-pores are calculated in two specific ranges of the T2 spectrum distribution. Based on the analysis of the fractal characteristics of 22 core samples, drilled from four boreholes of tight lithologic oil reservoirs of the Ordos Basin in China, a positive correlation between Swir and porosity is derived. A predictive model for Swir based on linear regressions of the fractal dimensions is then proposed. It reveals that Swir is controlled by the pore size and the roughness of the pores. The reliability of this model is tested and close agreement between predicted results and experimental data is found. This model is a reliable supplement for predicting the irreducible water saturation in cases where the T2 cutoff value cannot be accurately determined.
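
    The fractal dimension of a T2 interval is conventionally read off the slope of log cumulative saturation versus log T2 (log Sv = (3 - D) log T2 + c); the sketch below applies that construction to a synthetic spectrum, and the bin ranges are assumptions rather than the paper's calibrated ones:

```python
import numpy as np

def fractal_dimension(t2_ms, amplitude, t2_lo, t2_hi):
    """D from a log-log fit of cumulative saturation Sv against T2."""
    cum = np.cumsum(amplitude) / amplitude.sum()
    sel = (t2_ms >= t2_lo) & (t2_ms <= t2_hi)
    slope, _ = np.polyfit(np.log10(t2_ms[sel]), np.log10(cum[sel]), 1)
    return 3.0 - slope

# Synthetic unimodal T2 spectrum on log-spaced bins (hypothetical)
t2 = np.logspace(-1, 3, 64)                 # 0.1 ms .. 1000 ms
amp = np.exp(-((np.log10(t2) - 1.0) ** 2))
print(f"D (micro-pore range): {fractal_dimension(t2, amp, 0.1, 1.0):.2f}")
# Swir would then be predicted from a linear regression on such dimensions.
```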

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nayak, Vikas; Verma, U. P.

    First-principles quantum mechanical calculations have been employed to obtain the unit cell lattice parameters of mercury thiogallate (HgGa2S4) in the defect stannite structure for the first time. For this, we treated HgGa2S4 with two different types of site symmetries in the same space group. In both cases the obtained unit cell parameters are the same, which shows the accuracy of the present approach. The electronic band structures show semiconducting behavior in both cases. The density of states plots are also studied and discussed.

  5. [Calculation of standardised unit costs from a societal perspective for health economic evaluation].

    PubMed

    Bock, J-O; Brettschneider, C; Seidl, H; Bowles, D; Holle, R; Greiner, W; König, H H

    2015-01-01

    Due to demographic aging, economic evaluation of health care technologies for the elderly becomes more important. A standardised questionnaire to measure the health-related resource utilisation has been designed. The monetary valuation of the resource use documented by the questionnaire is a central step towards the determination of the corresponding costs. The aim of this paper is to provide unit costs for the resources in the questionnaire from a societal perspective. The unit costs are calculated pragmatically based on regularly published sources. Thus, an easy update is possible. This paper presents the calculated unit costs for outpatient medical care, inpatient care, informal and formal nursing care and pharmaceuticals from a societal perspective. The calculated unit costs can serve as a reference case in health economic evaluations and hence help to increase their comparability. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Geometrical optics approach in liquid crystal films with three-dimensional director variations.

    PubMed

    Panasyuk, G; Kelly, J; Gartland, E C; Allender, D W

    2003-04-01

    A formal geometrical optics approach (GOA) to the optics of nematic liquid crystals whose optic axis (director) varies in more than one dimension is described. The GOA is applied to the propagation of light through liquid crystal films whose director varies in three spatial dimensions. As an example, the GOA is applied to the calculation of light transmittance for the case of a liquid crystal cell which exhibits the homeotropic to multidomainlike transition (HMD cell). Properties of the GOA solution are explored, and comparison with the Jones calculus solution is also made. For variations on a smaller scale, where the Jones calculus breaks down, the GOA provides a fast, accurate method for calculating light transmittance. The results of light transmittance calculations for the HMD cell based on the director patterns provided by two methods, direct computer calculation and a previously developed simplified model, are in good agreement.

  7. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  8. Methyl group dynamics in paracetamol and acetanilide: probing the static properties of intermolecular hydrogen bonds formed by peptide groups

    NASA Astrophysics Data System (ADS)

    Johnson, M. R.; Prager, M.; Grimm, H.; Neumann, M. A.; Kearley, G. J.; Wilson, C. C.

    1999-06-01

    Measurements of tunnelling and librational excitations for the methyl group in paracetamol and tunnelling excitations for the methyl group in acetanilide are reported. In both cases, results are compared with molecular mechanics calculations, based on the measured low temperature crystal structures, which follow an established recipe. Agreement between calculated and measured methyl group observables is not as good as expected, and this is attributed to the presence of comprehensive hydrogen bond networks formed by the peptide groups. Good agreement is obtained with a periodic quantum chemistry calculation which uses density functional methods, these calculations confirming the validity of the one-dimensional rotational model used and the crystal structures. A correction to the Coulomb contribution to the rotational potential in the established recipe using semi-empirical quantum chemistry methods, which accommodates the modified charge distribution due to the hydrogen bonds, is investigated.

  9. Case Study: Calculating the Ecological Footprint of the 2004 Australian Association for Environmental Education (AAEE) Biennial Conference

    ERIC Educational Resources Information Center

    Rickard, Andrew

    2006-01-01

    Event tourism is accompanied by social, economic and environmental benefits and costs. The assessment of this form of tourism has however largely focused on the social and economic perspectives, while environmental assessments have been bound to a destination-based approach. The application of the Ecological Footprint methodology allows for these…

  10. Transire, a Program for Generating Solid-State Interface Structures

    DTIC Science & Technology

    2017-09-14

    …function-based electron transport property calculator. Three test cases are presented to demonstrate the usage of Transire: the misorientation of the graphene bilayer, the interface energy as a function of misorientation of copper grain boundaries, and electron transport transmission across the gallium nitride/silicon carbide interface. Subject terms: crystalline interface, electron transport, python, computational chemistry, grain boundary

  11. Feasibility study of shell buckling analysis using the modified structure method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.; Haftka, R. T.

    1972-01-01

    The modified structure method, which is based on Koiter's theory of imperfections, was used to calculate approximate buckling loads of several shells of revolution. The method does not appear to be practical for shells because, in many cases, the prebuckling nonlinearity may be too large to be treated accurately as a small imperfection.

  12. PBEQ-Solver for online visualization of electrostatic potential of biomolecules.

    PubMed

    Jo, Sunhwan; Vargyas, Miklos; Vasko-Szedlar, Judit; Roux, Benoît; Im, Wonpil

    2008-07-01

    PBEQ-Solver provides a web-based graphical user interface to read biomolecular structures, solve the Poisson-Boltzmann (PB) equations and interactively visualize the electrostatic potential. PBEQ-Solver calculates (i) electrostatic potential and solvation free energy, (ii) protein-protein (DNA or RNA) electrostatic interaction energy and (iii) pKa of a selected titratable residue. All the calculations can be performed in both aqueous solvent and membrane environments (with a cylindrical pore in the case of membrane). PBEQ-Solver uses the PBEQ module in the biomolecular simulation program CHARMM to solve the finite-difference PB equation of molecules specified by users. Users can interactively inspect the calculated electrostatic potential on the solvent-accessible surface as well as iso-electrostatic potential contours using a novel online visualization tool based on MarvinSpace molecular visualization software, a Java applet integrated within CHARMM-GUI (http://www.charmm-gui.org). To reduce the computational time on the server, and to increase the efficiency in visualization, all the PB calculations are performed with coarse grid spacing (1.5 Å before and 1 Å after focusing). PBEQ-Solver suggests various physical parameters for PB calculations and users can modify them if necessary. PBEQ-Solver is available at http://www.charmm-gui.org/input/pbeqsolver.

  13. Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.

    2000-06-01

    The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta and kidneys. The expected size, shape, topology and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and then matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (S.D. ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 - 252.1 cc/kidney, and the mean ratio of right to left kidney volume was 0.96 (S.D. ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.

  14. Non-invasive fetal sex determination by maternal plasma sequencing and application in X-linked disorder counseling.

    PubMed

    Pan, Xiaoyu; Zhang, Chunlei; Li, Xuchao; Chen, Shengpei; Ge, Huijuan; Zhang, Yanyan; Chen, Fang; Jiang, Hui; Jiang, Fuman; Zhang, Hongyun; Wang, Wei; Zhang, Xiuqing

    2014-12-01

    To develop a fetal sex determination method based on maternal plasma sequencing (MPS), assess its performance, and explore its potential use in X-linked disorder counseling. 900 cases of MPS data from a previous study were reviewed, of which 100 and 800 cases were used as the training and validation sets, respectively. The percentage of uniquely mapped sequencing reads on the Y chromosome was calculated and used to classify male and female cases. Eight pregnant women who are carriers of Duchenne muscular dystrophy (DMD) mutations were recruited, whose plasma was subjected to multiplex sequencing and fetal sex determination analysis. In the training set, a sensitivity of 96% and a false positive rate of 0% for male case detection were reached with our method. The blinded validation results showed that 421 of 423 male cases and 374 of 377 female cases were successfully identified, revealing a sensitivity and specificity of 99.53% and 99.20% for fetal sex determination, at as early as 12 gestational weeks. Fetal sex for all eight DMD genetic counseling cases was correctly identified and confirmed by amniocentesis. Based on MPS, high accuracy of non-invasive fetal sex determination can be achieved. This method can potentially be used for prenatal genetic counseling.
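
    The classification step reduces to thresholding the fraction of uniquely mapped Y-chromosome reads; a sketch with a hypothetical cutoff (the study derived its own from the 100-case training set):

```python
def classify_fetal_sex(y_reads, total_mapped_reads, male_cutoff=0.002):
    """Call fetal sex from the chromosome-Y read fraction in maternal
    plasma sequencing; the 0.2% cutoff here is purely illustrative."""
    y_frac = y_reads / total_mapped_reads
    return ("male" if y_frac >= male_cutoff else "female"), y_frac

sex, frac = classify_fetal_sex(y_reads=9_200, total_mapped_reads=3_500_000)
print(f"call: {sex} (Y fraction {frac:.4%})")
```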

  15. Quantitative assessment of building fire risk to life safety.

    PubMed

    Guanquan, Chu; Jinhua, Sun

    2008-06-01

    This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: probability and corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, Markov chain is combined with a time-dependent event tree for stochastic analysis on the occurrence probability of fire scenarios. To obtain consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which uncertainty of onset time to untenable conditions can be characterized by probability distribution. When calculating occupant evacuation time, occupant premovement time is considered as a probability distribution. Consequences of a fire scenario can be evaluated according to probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on occurrence probability and consequences of every fire scenario. To express the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
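
    In outline, the final risk metric is the probability-weighted sum of scenario consequences from the event tree; a sketch with made-up scenario data:

```python
# (occurrence probability per year, expected fatalities given the scenario);
# numbers are illustrative only, not taken from the case study.
scenarios = [
    (1.0e-3, 0.02),  # sprinklers and alarm both operate
    (2.0e-4, 0.50),  # alarm fails, delayed pre-movement and evacuation
    (5.0e-5, 2.10),  # sprinklers fail, untenable conditions before egress
]
risk = sum(p * c for p, c in scenarios)
print(f"expected annual fatalities: {risk:.2e}")
```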

  16. One-shot calculation of temperature-dependent optical spectra and phonon-induced band-gap renormalization

    NASA Astrophysics Data System (ADS)

    Zacharias, Marios; Giustino, Feliciano

    Electron-phonon interactions are of fundamental importance in the study of the optical properties of solids at finite temperatures. Here we present a new first-principles computational technique based on the Williams-Lax theory for performing predictive calculations of the optical spectra, including quantum zero-point renormalization and indirect absorption. The calculation of the Williams-Lax optical spectra is computationally challenging, as it involves the sampling over all possible nuclear quantum states. We develop an efficient computational strategy for performing "one-shot" finite-temperature calculations. These require only a single optimal configuration of the atomic positions. We demonstrate our methodology for the case of Si, C, and GaAs, yielding absorption coefficients in good agreement with experiment. This work opens the way for systematic calculations of optical spectra at finite temperature. This work was supported by the UK EPSRC (EP/J009857/1 and EP/M020517/) and the Leverhulme Trust (RL-2012-001), and the Graphene Flagship (EU-FP7-604391).

  17. Modeling of the metallic port in breast tissue expanders for photon radiotherapy.

    PubMed

    Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui

    2018-03-30

    The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeters (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to be 7.5 g/cm³, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found the TPS calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated based on the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on chest wall, while the port introduced significant dose shadow in skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after the treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  18. QSPR models for various physical properties of carbohydrates based on molecular mechanics and quantum chemical calculations.

    PubMed

    Dyekjaer, Jane Dannow; Jónsdóttir, Svava Osk

    2004-01-22

    Quantitative Structure-Property Relationships (QSPR) have been developed for a series of monosaccharides, including the physical properties of partial molar heat capacity, heat of solution, melting point, heat of fusion, glass-transition temperature, and solid state density. The models were based on molecular descriptors obtained from molecular mechanics and quantum chemical calculations, combined with other types of descriptors. Saccharides exhibit a large degree of conformational flexibility, therefore a methodology for selecting the energetically most favorable conformers has been developed, and was used for the development of the QSPR models. In most cases good correlations were obtained for monosaccharides. For five of the properties predictions were made for disaccharides, and the predicted values for the partial molar heat capacities were in excellent agreement with experimental values.

  19. Alternative power supply systems for remote industrial customers

    NASA Astrophysics Data System (ADS)

    Kharlamova, N. V.; Khalyasmaa, A. I.; Eroshenko, S. A.

    2017-06-01

    The paper addresses the problem of alternative power supply of remote industrial clusters with renewable electric energy generation. Based on a comparison of different technologies, consideration is given to wind energy application. The authors present a methodology for calculating the mean expected wind generation output, based on the Weibull distribution, which provides an effective express-tool for preliminary assessment of the required installed generation capacity. The case study is based on real data, including a database of meteorological information, relief characteristics, power system topology, etc. Wind generation feasibility estimation for a specific territory is followed by power flow calculations using a Monte Carlo methodology. Finally, the paper provides a set of recommendations to ensure safe and reliable power supply for the final customers and, subsequently, to support sustainable development of regions located far from megalopolises and industrial centres.
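
    A sketch of the mean-expected-output step: integrate a turbine power curve against the Weibull wind-speed density. The Weibull parameters and the cut-in/rated/cut-out speeds below are assumptions, not the paper's site data:

```python
import numpy as np

def mean_power_kw(k, c, power_curve, v_max=30.0, n=3000):
    """E[P] = integral of P(v) * Weibull(v; k, c) dv, by Riemann sum."""
    v = np.linspace(0.01, v_max, n)
    pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))
    return float(np.sum(power_curve(v) * pdf) * (v[1] - v[0]))

def power_curve(v):
    """Illustrative 2 MW turbine: cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s."""
    p = 2000.0 * np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0) ** 3
    return np.where((v < 3.0) | (v > 25.0), 0.0, p)

print(f"mean expected output: {mean_power_kw(k=2.0, c=7.5, power_curve=power_curve):.0f} kW")
```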

  20. [FQA: A method for floristic quality assessment based on conservatism of plant species].

    PubMed

    Cao, Li Juan; He, Ping; Wang, Mi; Xui, Jie; Ren, Ying

    2018-04-01

    FQA, which uses the conservatism of plant species for particular habitats and the species richness of plant communities, is a rapid method for the assessment of habitat quality. This method is based on the species composition of quadrats and coefficients of conservatism that are assigned to species by experts. The Floristic Quality Index (FQI), which reflects the vegetation integrity and degradation of a site, can be calculated by a simple formula and used for space-time comparison of habitat quality. The method has been widely used in more than ten countries including the United States and Canada. This paper presents the principle, calculation formulas and application cases of this method, with the aim of providing a simple, repeatable and comparable method to assess habitat quality for ecological managers and researchers.
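
    A sketch of the standard FQI formula (mean coefficient of conservatism times the square root of species richness, following Swink and Wilhelm); the quadrat data below are invented:

```python
from math import sqrt

def floristic_quality_index(c_values):
    """FQI = mean(C) * sqrt(species richness)."""
    return (sum(c_values) / len(c_values)) * sqrt(len(c_values))

quadrat = [7, 4, 9, 3, 5, 8, 6]  # expert-assigned C values on a 0-10 scale
print(f"FQI = {floristic_quality_index(quadrat):.1f}")  # -> 15.9
```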

  1. Statin-Associated Polymyalgia Rheumatica. An Analysis Using WHO Global Individual Case Safety Database: A Case/Non-Case Approach

    PubMed Central

    de Jong, Hilda J. I.; Saldi, Siti R. F.; Klungel, Olaf H.; Vandebriel, Rob J.; Souverein, Patrick C.; Meyboom, Ronald H. B.; Passier, J. L. M. (Anneke); van Loveren, Henk; Tervaert, Jan Willem Cohen

    2012-01-01

    Objective To assess whether there is an association between statin use and the occurrence of polymyalgia rheumatica (PMR) in the spontaneous reporting database of the World Health Organisation (WHO). Methods We conducted a case/non-case study based on individual case safety reports (ICSR) in the WHO global ICSR database (VigiBase). Case reports containing the adverse event term polymyalgia rheumatica (WHOART or MedDRA Preferred Term) were defined as cases. Non-cases were all case reports containing other adverse event terms. Each case was matched to five non-cases by age, gender, and time of reporting. Case reports listing a statin as suspected or concomitant drug were identified using the Anatomical Therapeutic Chemical (ATC) classification. Multivariate logistic regression was used to calculate reporting odds ratios (RORs) with 95% confidence intervals (CI). Results We identified 327 reports of PMR as cases and 1635 reports of other ADRs as non-cases. Among cases, statins were more frequently reported as the suspected agent (29.4%) than among non-cases (2.9%). After adjustment for several covariates, statins were significantly associated with reports of PMR (ROR 14.21; 95% CI 9.89–20.85). Conclusion The results of this study lend support to previous anecdotal case reports in the literature suggesting that the use of a statin may be associated with the occurrence of PMR. Further studies are needed to assess the strength of the association in more detail and to elucidate the underlying mechanism. PMID:22844450
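
    A crude (unadjusted) reporting odds ratio can be reconstructed from the counts implied by the percentages above; it differs slightly from the covariate-adjusted 14.21 reported by the study:

```python
from math import exp, log, sqrt

# 2x2 counts implied by the abstract: 29.4% of 327 cases and 2.9% of
# 1635 non-cases had a statin recorded as the suspected drug.
a, b = 96, 231    # cases: statin-suspected / not
c, d = 47, 1588   # non-cases: statin-suspected / not

ror = (a / b) / (c / d)
se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)       # Wald standard error on log scale
lo, hi = exp(log(ror) - 1.96 * se), exp(log(ror) + 1.96 * se)
print(f"crude ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```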

  2. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since such characters have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.

  3. The component content of active particles in a plasma-chemical reactor based on volume barrier discharge

    NASA Astrophysics Data System (ADS)

    Soloshenko, I. A.; Tsiolko, V. V.; Pogulay, S. S.; Terent'yeva, A. G.; Bazhenov, V. Yu; Shchedrin, A. I.; Ryabtsev, A. V.; Kuzmichev, A. I.

    2007-02-01

    In this paper the results of theoretical and experimental studies of the component content of active particles formed in a plasma-chemical reactor, composed of a multiple-cell generator of active particles based on volume barrier discharge and a working chamber, are presented. For calculation of the content of uncharged plasma components an approach is proposed which is based on averaging the introduced power over the entire volume. Advantages of such an approach lie in the absence of fitting parameters, such as the dimensions of microdischarges, their surface density and rate of breakdown. The calculation and the experiment were carried out with the use of dry air (20% relative humidity) as the plasma generating medium. Concentrations of O3, HNO3, HNO2, N2O5 and NO3 were measured experimentally in the discharge volume and working chamber for particle residence times in the discharge of 0.3 s and more and a discharge specific power of 1.5 W cm-3. It has been determined that the best agreement between the calculation and the experiment occurs at calculated gas medium temperatures in the discharge plasma of about 400-425 K, which correspond to the experimentally measured rotational temperature of nitrogen. In most cases the calculated concentrations of O3, HNO3, HNO2, N2O5 and NO3 for the barrier discharge and the working chamber are in fairly good agreement with the respective measured values.

  4. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses.

    PubMed

    Faith, Daniel P

    2015-02-19

    The phylogenetic diversity measure ('PD') measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  5. Precision bounds for gradient magnetometry with atomic ensembles

    NASA Astrophysics Data System (ADS)

    Apellaniz, Iagoba; Urizar-Lanz, Iñigo; Zimborás, Zoltán; Hyllus, Philipp; Tóth, Géza

    2018-05-01

    We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, a simultaneous measurement is needed, as the homogeneous field must also be estimated. We prove that for the cases studied in this paper, such a measurement is feasible. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles. We also consider a single atomic ensemble with an arbitrary density profile, where the atoms cannot be addressed individually, and which is a very relevant case for experiments. Our model can take into account even correlations between particle positions. While in most of the discussion we consider an ensemble of localized particles that are classical with respect to their spatial degree of freedom, we also discuss the case of gradient metrology with a single Bose-Einstein condensate.

  6. One-carbon metabolite ratios as functional B-vitamin markers and in relation to colorectal cancer risk.

    PubMed

    Gylling, Björn; Myte, Robin; Ulvik, Arve; Ueland, Per M; Midttun, Øivind; Schneede, Jörn; Hallmans, Göran; Häggström, Jenny; Johansson, Ingegerd; Van Guelpen, Bethany; Palmqvist, Richard

    2018-05-22

    One-carbon metabolism biomarkers are easily measured in plasma, but analyzing them one at a time in relation to disease does not take into account the interdependence of the many factors involved. The relative dynamics of major one-carbon metabolism branches can be assessed by relating the functional B-vitamin marker total homocysteine (tHcy) to transsulfuration (total cysteine) and methylation (creatinine) outputs. We validated the ratios of tHcy to total cysteine (Hcy:Cys), tHcy to creatinine (Hcy:Cre), and tHcy to cysteine to creatinine (Hcy:Cys:Cre) as functional markers of B-vitamin status. We also calculated the associations of these ratios to colorectal cancer (CRC) risk. Furthermore, the relative contribution of potential confounders to the variance of the ratio-based B-vitamin markers was calculated by linear regression in a nested case-control study of 613 CRC cases and 1190 matched controls. Total B-vitamin status was represented by a summary score comprising Z-standardized plasma concentrations of folate, cobalamin, betaine, pyridoxal 5'-phosphate, and riboflavin. Associations with CRC risk were estimated using conditional logistic regression. We found that the ratio-based B-vitamin markers all outperformed tHcy as markers of total B-vitamin status, in both CRC cases and controls. Additionally, associations with CRC risk were similar for the ratio-based B-vitamin markers and total B-vitamin status (approximately 25% lower risk for high versus low B-vitamin status). In conclusion, ratio-based B-vitamin markers were good predictors of total B-vitamin status and displayed similar associations as total B-vitamin status with CRC risk. Since tHcy and creatinine are routinely clinically analyzed, Hcy:Cre could be easily implemented in clinical practice. This article is protected by copyright. All rights reserved. © 2018 UICC.

  7. Prevalence and Incidence of Systemic Lupus Erythematosus in a Population-Based Registry of American Indian and Alaska Native People, 2007–2009

    PubMed Central

    Ferucci, Elizabeth D.; Johnston, Janet M.; Gaddy, Jasmine R.; Sumner, Lisa; Posever, James O.; Choromanski, Tammy L.; Gordon, Caroline; Lim, S. Sam; Helmick, Charles G.

    2015-01-01

    Objective Few studies have investigated the epidemiology of systemic lupus erythematosus (SLE) in American Indian and Alaska Native populations. The objective of this study was to determine the prevalence and incidence of SLE in the Indian Health Service (IHS) active clinical population in 3 regions of the US. Methods For this population-based registry within the IHS, the denominator consisted of individuals in the IHS active clinical population in 2007, 2008, and/or 2009 and residing in a community in 1 of 3 specified regions. Potential SLE cases were identified based on the presence of a diagnostic code for SLE or related disorder in the IHS National Data Warehouse. Detailed medical record abstraction was performed for each potential case. The primary case definition was documentation in the medical record of ≥4 of the revised American College of Rheumatology criteria for the classification of SLE. Prevalence was calculated for 2007, and the mean annual incidence was calculated for the years 2007 through 2009. Results The age-adjusted prevalence and incidence of SLE according to the primary definition were 178 per 100,000 person-years (95% confidence interval [95% CI] 157–200) and 7.4 per 100,000 person-years (95% CI 5.1–10.4). Among women, the age-adjusted prevalence was 271, and the age-adjusted incidence was 10.4. The prevalence was highest in women ages 50–59 years and in the Phoenix Area IHS. Conclusion The first population-based lupus registry in the US American Indian and Alaska Native population has demonstrated that the prevalence and incidence of SLE are high. Our estimates are as high as or higher than the rates reported in the US black population. PMID:24891315
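
    The age adjustment referred to above is direct standardization: stratum-specific rates weighted by a standard population's age distribution. A sketch with invented strata and weights:

```python
# (stratum prevalence per 100,000, standard-population weight); the three
# strata and all numbers are illustrative, not the registry's data.
strata = [
    (90.0, 0.35),   # e.g. ages 0-29
    (210.0, 0.40),  # e.g. ages 30-49
    (320.0, 0.25),  # e.g. ages 50+
]
adjusted = sum(rate * weight for rate, weight in strata)
print(f"age-adjusted prevalence: {adjusted:.0f} per 100,000")
```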

  8. Selection bias in population-based cancer case-control studies due to incomplete sampling frame coverage.

    PubMed

    Walsh, Matthew C; Trentham-Dietz, Amy; Gangnon, Ronald E; Nieto, F Javier; Newcomb, Polly A; Palta, Mari

    2012-06-01

    Increasing numbers of individuals are choosing to opt out of population-based sampling frames due to privacy concerns. This is especially a problem in the selection of controls for case-control studies, as the cases often arise from relatively complete population-based registries, whereas control selection requires a sampling frame. If opt out is also related to risk factors, bias can arise. We linked breast cancer cases who reported having a valid driver's license from the 2004-2008 Wisconsin women's health study (N = 2,988) with a master list of licensed drivers from the Wisconsin Department of Transportation (WDOT). This master list excludes Wisconsin drivers that requested their information not be sold by the state. Multivariate-adjusted selection probability ratios (SPR) were calculated to estimate potential bias when using this driver's license sampling frame to select controls. A total of 962 cases (32%) had opted out of the WDOT sampling frame. Cases age <40 (SPR = 0.90), income either unreported (SPR = 0.89) or greater than $50,000 (SPR = 0.94), lower parity (SPR = 0.96 per one-child decrease), and hormone use (SPR = 0.93) were significantly less likely to be covered by the WDOT sampling frame (α = 0.05 level). Our results indicate the potential for selection bias due to differential opt out between various demographic and behavioral subgroups of controls. As selection bias may differ by exposure and study base, the assessment of potential bias needs to be ongoing. SPRs can be used to predict the direction of bias when cases and controls stem from different sampling frames in population-based case-control studies.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy usually is carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented, that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation to point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.

  10. Ab Initio Calculations Applied to Problems in Metal Ion Chemistry

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Partridge, Harry; Arnold, James O. (Technical Monitor)

    1994-01-01

    Electronic structure calculations can provide accurate spectroscopic data (such as molecular structures, vibrational frequencies, binding energies, etc.) that have been very useful in explaining trends in experimental data and in identifying incorrect experimental measurements. In addition, ab initio calculations have given considerable insight into the many interactions that make the chemistry of transition metal systems so diverse. In this review we focus on cases where calculations and experiment have been used to solve interesting chemical problems involving metal ions. The examples include cases where theory was used to differentiate between disparate experimental values and cases where theory was used to explain unexpected experimental results.

  11. Theory of Auger core-valence-valence processes in simple metals. II. Dynamical and surface effects on Auger line shapes

    NASA Astrophysics Data System (ADS)

    Almbladh, C.-O.; Morales, A. L.

    1989-02-01

    Auger CVV spectra of simple metals are generally believed to be well described by one-electron-like theories in the bulk which account for matrix elements and, in some cases, also static core-hole screening effects. We present here detailed calculations on Li, Be, Na, Mg, and Al using self-consistent bulk wave functions and proper matrix elements. The resulting spectra differ markedly from experiment and peak at too low energies. To explain this discrepancy we investigate effects of the surface and dynamical effects of the sudden disappearance of the core hole in the final state. To study core-hole effects we solve the Mahan-Nozières-De Dominicis (MND) model numerically over the entire band. The core-hole potential and other parameters in the MND model are determined by self-consistent calculations of the core-hole impurity. The results are compared with simpler approximations based on the final-state rule due to von Barth and Grossmann. To study surface and mean-free-path effects we perform slab calculations for Al but use a simpler infinite-barrier model in the remaining cases. The model reproduces the slab spectra for Al with very good accuracy. In all cases investigated either the effects of the surface or the effects of the core hole give important modifications and a much improved agreement with experiment.

  12. Extrapolating the Trends of Test Drop Data with Opening Shock Factor Calculations: the Case of the Orion Main and Drogue Parachutes Inflating to 1st Reefed Stage

    NASA Technical Reports Server (NTRS)

    Potvin, Jean; Ray, Eric

    2017-01-01

    We describe a new calculation of the opening shock factor C_k characterizing the inflation performance of NASA's Orion spacecraft main and drogue parachutes opening under a reefing constraint (1st stage reefing), as currently tested in the Capsule Parachute Assembly System (CPAS) program. This calculation is based on an application of the Momentum-Impulse Theorem at low mass ratio (R_m < 10^-1) and on an earlier analysis of the opening performance of drogues decelerating point masses and inflating along horizontal trajectories. Herein we extend the reach of the Theorem to include the effects of payload drag and gravitational impulse during near-vertical motion - both important pre-requisites for CPAS parachute analysis. The result is a family of C_k versus R_m curves which can be used for extrapolating beyond the drop-tested envelope. The paper proves this claim in the case of the CPAS Mains and Drogues opening while trailing either a Parachute Compartment Drop Test Vehicle or a Parachute Test Vehicle (an Orion capsule boiler plate). It is seen that in all cases the values of the opening shock factor can be extrapolated over a range in mass ratio that is at least twice that of the test drop data.

  13. The harmonic force field of benzene. A local density functional study

    NASA Astrophysics Data System (ADS)

    Bérces, Attila; Ziegler, Tom

    1993-03-01

    The harmonic force field of benzene has been calculated by a method based on local density functional theory (LDF). The calculations were carried out employing a triple zeta basis set with triple polarization on hydrogen and double polarization on carbon. The LDF force field was compared to the empirical field due to Ozkabak, Goodman, and Thakur [A. G. Ozkabak, L. Goodman, and S. N. Thakur, J. Phys. Chem. 95, 9044 (1991)], which has served as a benchmark for theoretical calculations, as well as to the theoretical field based on scaled Hartree-Fock ab initio calculations due to Pulay, Fogarasi, and Boggs [P. Pulay, G. Fogarasi, and J. E. Boggs, J. Chem. Phys. 74, 3999 (1981)]. The calculated LDF force field is in excellent qualitative and very good quantitative agreement with the theoretical field proposed by Pulay, Fogarasi, and Boggs as well as the empirical field due to Ozkabak, Goodman, and Thakur. The LDF field is closest to the values of Pulay and co-workers in those cases where the force constants due to Pulay, Fogarasi, and Boggs and to Ozkabak, Goodman, and Thakur differ in sign or magnitude. The accuracy of the LDF force field was investigated by evaluating a number of eigenvalue- and eigenfunction-dependent quantities from the LDF force constants. The quantities under investigation include vibrational frequencies of seven isotopomers, isotopic shifts, as well as absorption intensities. The calculations were performed at both theoretically optimized and approximate equilibrium reference geometries. The predicted frequencies are usually within 1%-2% compared to the empirical harmonic frequencies. The least accurate frequency deviates by 5% from the experimental value. The average deviations from the empirical harmonic frequencies of C6H6 and C6D6 are 16.7 cm-1 (1.5%) and 15.2 cm-1 (1.7%), respectively, not including CH stretching frequencies, in the case where a theoretical reference geometry was used. The accuracy of the out-of-plane force field is especially remarkable; the average deviations for the C6H6 and C6D6 frequencies, based on the LDF force field, are 9.4 cm-1 (1.2%) and 7.3 cm-1 (1.2%), respectively. The absorption intensities were not predicted as accurately as expected based on the size of the basis set applied. An analysis is provided to ensure that the force constants are not significantly affected by numerical errors due to the numerical integration scheme employed.

  14. [Process-oriented cost calculation in interventional radiology. A case study].

    PubMed

    Mahnken, A H; Bruners, P; Günther, R W; Rasche, C

    2012-01-01

    Currently used costing methods such as cost centre accounting do not sufficiently reflect the process-based resource utilization in medicine. The goal of this study was to establish a process-oriented cost assessment of percutaneous radiofrequency (RF) ablation of liver and lung metastases. In each of 15 patients a detailed task analysis of the primary process of hepatic and pulmonary RF ablation was performed. Based on these data a dedicated cost calculation model was developed for each primary process. The costs of each process were computed and compared with the revenue for in-patients according to the German diagnosis-related groups (DRG) system 2010. The RF ablation of liver metastases in patients without relevant comorbidities and a low patient complexity level results in a loss of EUR 588.44, whereas the treatment of patients with a higher complexity level yields an acceptable profit. The treatment of pulmonary metastases is profitable even in cases of additional expenses due to complications. Process-oriented costing provides relevant information that is needed for understanding the economic impact of treatment decisions. It is well suited as a starting point for economically driven process optimization and reengineering. Under the terms of the German DRG 2010 system percutaneous RF ablation of lung metastases is economically reasonable, while RF ablation of liver metastases in cases of low patient complexity levels does not cover the costs.

  15. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  16. Backscattered electron simulations to evaluate sensitivity against electron dosage of buried semiconductor features

    NASA Astrophysics Data System (ADS)

    Mukhtar, Maseeh; Thiel, Bradley

    2018-03-01

    In fabrication, overlay measurements of semiconductor device patterns have conventionally been performed using optical methods, ranging from image-based box-in-box techniques to the more recent diffraction-based overlay (DBO). Alternatively, SEM overlay is under consideration for in-device overlay. The two main application spaces are measurement of features from multiple mask levels on the same surface and measurement of buried features. Modern CD-SEMs are adept at measuring overlay for cases where all features are on the surface. In order to measure overlay of buried features, HV-SEM is needed. Gate-to-fin and BEOL overlay are important use cases for this technique. A JMONSEL simulation exercise was performed for these two cases using 10 nm line/space gratings with a graduated increase in depth of burial. Backscattered energy loss results of these simulations were used to calculate the measurement sensitivity for buried features versus electron dosage for an array of electron beam voltages.

  17. Coupled incompressible Smoothed Particle Hydrodynamics model for continuum-based modelling sediment transport

    NASA Astrophysics Data System (ADS)

    Pahar, Gourabananda; Dhar, Anirban

    2017-04-01

    A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for simulation of sediment displacement in an erodible bed. The coupled framework consists of two separate incompressible modules: (a) a granular module and (b) a fluid module. The granular module considers a friction-based rheology model to calculate deviatoric stress components from pressure. The module is validated for the Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside the porous domain. An interaction force pair comprising fluid pressure, viscous and drag force terms acts as a bridge between the two flow modules. The coupled model is validated against three dam-break flow cases with different initial conditions of the movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering the effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenarios.

  18. Development of MY-DRG casemix pharmacy service weights in UKM Medical Centre in Malaysia.

    PubMed

    Ali Jadoo, Saad Ahmed; Aljunid, Syed Mohamed; Nur, Amrizal Muhammad; Ahmed, Zafar; Van Dort, Dexter

    2015-02-10

    The service weight is among several issues and challenges in the implementation of case-mix in developing countries, including Malaysia. The aim of this study was to develop Malaysian Diagnosis Related Group (MY-DRG) case-mix pharmacy service weights in University Kebangsaan Malaysia-Medical Center (UKMMC) by identifying the actual cost of pharmacy services by MY-DRG group in the hospital. All patients admitted to UKMMC in 2011 were recruited into this study. A combination of step-down and bottom-up costing methodologies was used. The drug and supplies cost, the staff cost, the overhead cost, and the equipment cost make up the four components of pharmacy cost. A direct costing approach was employed to calculate the drug and supplies cost from the electronic prescription system and the inpatient pharmacy staff cost, while the overhead cost and the pharmacy equipment cost were calculated indirectly from the MY-DRG database. The total pharmacy cost was obtained by summing the four pharmacy cost components for each MY-DRG. The pharmacy service weight of a MY-DRG was estimated by dividing the average pharmacy cost of the investigated MY-DRG by a reference average (usually the average pharmacy cost across all MY-DRGs). Drugs and supplies were the main component (86.0%) of pharmacy cost, compared to overhead cost centers (7.3%), staff cost (6.5%) and pharmacy equipment (0.2%). Out of 789 inpatient MY-DRG case-mix groups, 450 (57.0%) groups were utilized by the UKMMC. A pharmacy service weight was calculated for each of these 450 MY-DRG groups. The MY-DRG case-mix group for Lymphoma & Chronic Leukemia with severity level three (C-4-11-III) has the highest pharmacy service weight of 11.8, equivalent to an average pharmacy cost of RM 5383.90, while the MY-DRG case-mix group for Circumcision with severity level one (V-1-15-I) has the lowest pharmacy service weight of 0.04, equivalent to an average pharmacy cost of RM 17.83. In summary, a mixed approach based partly on top-down and partly on bottom-up costing methodology was used to develop MY-DRG case-mix pharmacy service weights for the 450 groups utilized by the UKMMC in 2011.
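
    The weight arithmetic described above is plain ratio-taking; a minimal sketch follows, with hypothetical cost figures standing in for the UKMMC data except for the two groups quoted in the abstract.

    ```python
    # Sketch of the MY-DRG pharmacy service-weight calculation described above.
    # Only the two quoted groups carry real figures; the rest are hypothetical.

    # Average pharmacy cost per MY-DRG group (RM), already summed over the four
    # components: drugs/supplies, staff, overhead and equipment.
    avg_pharmacy_cost = {
        "C-4-11-III": 5383.90,   # Lymphoma & Chronic Leukemia, severity III
        "V-1-15-I": 17.83,       # Circumcision, severity I
        "HYPO-1": 450.00,        # hypothetical group
        "HYPO-2": 880.00,        # hypothetical group
    }

    # Reference value: the average pharmacy cost across all groups.
    reference = sum(avg_pharmacy_cost.values()) / len(avg_pharmacy_cost)

    # Service weight = group average cost / reference average cost.
    for drg, cost in sorted(avg_pharmacy_cost.items(), key=lambda kv: -kv[1]):
        print(f"{drg}: service weight = {cost / reference:.2f}")
    ```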

  19. How we estimate GFR--a pitfall of using a serum creatinine-based formula.

    PubMed

    Refaie, R; Moochhala, S H; Kanagasundaram, N S

    2007-10-01

    Chronic kidney disease (CKD) is defined using the estimated glomerular filtration rate (eGFR). This has led to a large increase in the diagnosis of CKD in the United Kingdom, the majority of which is in its earlier stages and is detected in non-hospital settings. It is important to be aware that eGFR calculations will reflect inaccuracies in the measured serum creatinine, as the latter is an important component of the calculation. We report a case in which a patient with high muscle mass who had consumed large quantities of a creatine-containing nutritional supplement presented with apparently reduced renal function on the basis of the serum creatinine and therefore also the eGFR calculation (MDRD equation). Creatine is an amino acid derivative which is a precursor of creatinine, and is known to transiently increase serum creatinine. Six weeks after discontinuing creatine ingestion, serum creatinine had fallen but still gave rise to an apparently abnormal calculated eGFR. In fact, renal function was shown to be normal when estimated using 24-hour urinary creatinine clearance. This case demonstrates that the upper extreme of muscle mass and ingestion of creatine can affect not only serum creatinine but also the calculated eGFR. Knowledge of common confounding factors and their effects on serum creatinine and eGFR will allow appreciation of the limitations of these measures of renal function, and can prevent unnecessary over-investigation of such patients.
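
    For concreteness, a sketch of the two calculations contrasted in this report: the 4-variable MDRD estimate, with its commonly published (IDMS-traceable) coefficients, against a measured 24-hour creatinine clearance. The patient values below are hypothetical, not those of the reported case.

    ```python
    # Sketch contrasting an MDRD eGFR with a measured creatinine clearance.
    # Coefficients are the commonly published 4-variable (IDMS-traceable) MDRD
    # ones; the example inputs are hypothetical, not the reported patient's data.

    def egfr_mdrd(serum_cr_mg_dl: float, age: float, female: bool, black: bool) -> float:
        """Estimated GFR in mL/min/1.73 m^2 from the 4-variable MDRD equation."""
        egfr = 175.0 * serum_cr_mg_dl ** -1.154 * age ** -0.203
        if female:
            egfr *= 0.742
        if black:
            egfr *= 1.212
        return egfr

    def crcl_24h(urine_cr_mg_dl: float, urine_vol_ml: float, serum_cr_mg_dl: float) -> float:
        """Measured creatinine clearance in mL/min from a 24-hour urine collection."""
        return (urine_cr_mg_dl * urine_vol_ml) / (serum_cr_mg_dl * 1440.0)

    # A hypothetical muscular creatine user: the raised serum creatinine
    # depresses the eGFR, while the measured clearance remains normal.
    print(f"eGFR = {egfr_mdrd(1.8, 30, female=False, black=False):.0f} mL/min/1.73 m^2")
    print(f"CrCl = {crcl_24h(150.0, 2000.0, 1.8):.0f} mL/min")
    ```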

  20. A Method to Predict the Structure and Stability of RNA/RNA Complexes.

    PubMed

    Xu, Xiaojun; Chen, Shi-Jie

    2016-01-01

    RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculations required for the conformational constraints and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and a two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.

  1. Simulation Study on Understanding the Spin Transport in MgO Adsorbed Graphene Based Magnetic Tunnel Junction

    NASA Astrophysics Data System (ADS)

    Raturi, Ashish; Choudhary, Sudhanshu

    2016-11-01

    First-principles calculations of the spin-dependent electronic transport properties of a magnetic tunnel junction (MTJ) consisting of an MgO-adsorbed graphene nanosheet sandwiched between two CrO2 half-metallic ferromagnetic (HMF) electrodes are reported. MgO adsorption opens a bandgap in the graphene nanosheet, which makes it more suitable for use as a tunnel barrier in MTJs. It was found that MgO adsorption suppresses transmission probabilities for the spin-down channel in the parallel configuration (PC) and also suppresses transmission in the antiparallel configuration (APC) for both spin-up and spin-down channels. A tunnel magneto-resistance (TMR) of 100% is obtained at all bias voltages in the MgO-adsorbed graphene-based MTJ, which is higher than that reported in pristine graphene-based MTJs. HMF electrodes were found suitable to achieve a perfect spin filtration effect and high TMR. I-V characteristics for both parallel and antiparallel magnetization states of the junction are calculated. The high TMR suggests its usefulness in spin valves and other spintronics-based applications.
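
    For reference, the TMR figure of merit quoted above reduces to one line of arithmetic. A sketch under one common convention (some papers normalise by GP + GAP instead), with hypothetical conductances:

    ```python
    # Tunnel magneto-resistance under the convention TMR = (G_P - G_AP)/G_AP * 100%.
    def tmr_percent(g_parallel: float, g_antiparallel: float) -> float:
        return (g_parallel - g_antiparallel) / g_antiparallel * 100.0

    # Hypothetical transmission-derived conductances (units of e^2/h) at one bias:
    print(f"TMR = {tmr_percent(0.42, 0.21):.0f} %")   # -> 100 %
    ```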

  2. SU-F-T-409: Modelling of the Magnetic Port in Temporary Breast Tissue Expanders for a Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, J; Heins, D; Zhang, R

    Purpose: To model the magnetic port in temporary breast tissue expanders and to improve the accuracy of dose calculation in Pinnacle, a commercial treatment planning system (TPS). Methods: A magnetic port in the tissue expander was modeled on a radiological measurement basis: the dimensions and the density of the model were determined from film images and from ion chamber measurements under the magnetic port, respectively. The model was then evaluated for various field sizes and photon energies by comparing depth dose values calculated by the TPS (using our new model) against ion chamber measurements in a water tank. The model was further evaluated using a simplified anthropomorphic phantom with realistic geometry by placing thermoluminescent dosimeters (TLDs) around the magnetic port. Dose perturbations in a real patient's treatment plan from the new model and from a current clinical model, which is based on subjective contouring created by the dosimetrist, were also compared. Results: Dose calculations based on our model showed less than 1% difference from ion chamber measurements for various field sizes and energies under the magnetic port when the magnetic port was placed parallel to the phantom surface. When it was placed perpendicular to the phantom surface, the maximum difference was 3.5%, while average differences were less than 3.1% for all cases. For the simplified anthropomorphic phantom, the calculated point doses agreed with TLD measurements within 5.2%. By comparison with the current clinical model used by the TPS, it was found that the current clinical model overestimates the effect of the magnetic port. Conclusion: Our new model showed good agreement with measurement for all cases. It could potentially improve the accuracy of dose delivery to breast cancer patients.

  3. [Retrospective calculation of the workload in emergency departments in case of a mass accident. An analysis of the Love Parade 2010].

    PubMed

    Ackermann, O; Heigel, U; Lazic, D; Vogel, T; Schofer, M D; Rülander, C

    2012-04-01

    For the clinical planning of mass events the emergency departments are of critical importance, but there are still no data available on the workload in these cases. As this is essential for effective medical preparation, we calculated the workload based on the ICD codes of the victims of the Love Parade 2010 in Duisburg. Based on the patient data of the Love Parade 2010 we used filter diagnoses to estimate the number of shock room patients, regular admittances, surgical wound treatments, applications of casts or splints, and diagnoses of drug abuse. In addition every patient was assigned a Manchester Triage System (MTS) category. This resulted in a chronological and quantitative workload profile of the emergency department, which was evaluated against the clinical experience of the departmental medical staff. The workload profile as a whole displayed a realistic image of the actual situation on July 24, 2010. While the number, diagnoses and chronology of the surgical patients were realistic, the MTS classification was not. The emergency department had a maximum of 6 emergency room admittances, 6 regular admittances, 4-5 surgical wound treatments, 3 casts and 2 drug abuse patients per hour. The calculation of workload from ICD data is a reasonable tool for retrospective estimation of the workload of an emergency department, and the data can be used for future planning. Retrospective MTS grouping is at present not sufficiently reliable for a realistic calculation or for valid data publication. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Optimal trace inequality constants for interior penalty discontinuous Galerkin discretisations of elliptic operators using arbitrary elements with non-constant Jacobians

    NASA Astrophysics Data System (ADS)

    Owens, A. R.; Kópházi, J.; Eaton, M. D.

    2017-12-01

    In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants results in better-conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
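
    The core computation can be illustrated in a few lines: the sharpest constant C in a trace inequality v^T F v <= C v^T K v is the largest eigenvalue of the generalised problem F x = lambda K x. The sketch below uses small hypothetical stand-ins for the face and volumetric stiffness matrices of one element (in a real DG setting the volumetric matrix is only semi-definite, which the paper's method handles).

    ```python
    # Sketch of the generalised eigenvalue computation behind the optimal trace
    # constants: the sharpest C with v^T F v <= C v^T K v is the largest
    # eigenvalue of F x = lambda K x. F and K are hypothetical 3x3 stand-ins.
    import numpy as np
    from scipy.linalg import eigh

    F = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])   # symmetric "face" matrix
    K = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 2.0]])   # symmetric positive definite "volume" matrix

    C_opt = eigh(F, K, eigvals_only=True)[-1]   # largest generalised eigenvalue
    print(f"optimal trace constant C = {C_opt:.4f}")
    # An interior penalty parameter would then be chosen proportional to C_opt.
    ```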

  5. An accurate model for the computation of the dose of protons in water.

    PubMed

    Embriaco, A; Bellinzona, V E; Fontana, A; Rotondi, A

    2017-06-01

    The accurate and fast calculation of the dose in proton radiation therapy is an essential ingredient for successful treatments. We propose a novel approach with a minimal number of parameters. The approach is based on the exact calculation of the electromagnetic part of the interaction, namely the Molière theory of multiple Coulomb scattering for the transversal 1D projection and the Bethe-Bloch formula for the longitudinal stopping power profile, including Gaussian energy straggling. To this e.m. contribution the nuclear proton-nucleus interaction is added with a simple two-parameter model. Then, the non-Gaussian lateral profile is used to calculate the radial dose distribution with a method that assumes cylindrical symmetry of the distribution. The results, obtained with a fast C++ based computational code called MONET (MOdel of ioN dosE for Therapy), are in very good agreement with the FLUKA MC code, within a few percent in the worst case. This study provides a new tool for fast dose calculation or verification, possibly for clinical use. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
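
    The longitudinal ingredient named above, the Bethe-Bloch stopping power, is easy to sketch for protons in water (no shell or density corrections, and no straggling or nuclear terms; an illustration, not the MONET implementation):

    ```python
    # Minimal Bethe-Bloch mass stopping power for protons in water.
    import math

    ME_C2 = 0.510998950      # electron rest energy [MeV]
    MP_C2 = 938.2720813      # proton rest energy [MeV]
    K = 0.307075             # 4*pi*N_A*r_e^2*m_e*c^2 [MeV*cm^2/mol]
    Z_OVER_A = 0.5551        # <Z/A> for water [mol/g]
    I_WATER = 75.0e-6        # mean excitation energy of water [MeV]

    def stopping_power(T_mev: float) -> float:
        """-dE/dx in MeV cm^2/g for a proton of kinetic energy T_mev."""
        gamma = 1.0 + T_mev / MP_C2
        beta2 = 1.0 - 1.0 / gamma**2
        ratio = ME_C2 / MP_C2
        t_max = 2.0 * ME_C2 * beta2 * gamma**2 / (1.0 + 2.0 * gamma * ratio + ratio**2)
        log_term = 0.5 * math.log(2.0 * ME_C2 * beta2 * gamma**2 * t_max / I_WATER**2)
        return K * Z_OVER_A / beta2 * (log_term - beta2)

    for T in (10.0, 100.0, 200.0):
        print(f"T = {T:6.1f} MeV  ->  -dE/dx = {stopping_power(T):6.2f} MeV cm^2/g")
    ```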

  6. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    PubMed

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ from each other when reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values are significant, exceeding 1% in some cases. To resolve this asymmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented, and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former showed approximately 0.1% higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
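
    Schematically, the orthology step is a reciprocal-best-hit filter followed by a symmetric average. The identity tables below are mock stand-ins for BLASTn fragment hits, not real genome data:

    ```python
    # Schematic of the OrthoANI idea: cut both genomes into fragments, keep
    # only reciprocal-best ("orthologous") fragment pairs, and average the
    # identities of both directions, which makes the measure symmetric.

    # best_hit[x] = (best-matching fragment in the other genome, % identity)
    a_to_b = {"a1": ("b1", 98.2), "a2": ("b2", 96.7), "a3": ("b5", 91.0)}
    b_to_a = {"b1": ("a1", 98.0), "b2": ("a2", 96.9), "b5": ("a9", 88.5)}

    orthologous = []
    for frag_a, (frag_b, ident_ab) in a_to_b.items():
        hit = b_to_a.get(frag_b)
        if hit and hit[0] == frag_a:                      # reciprocal best hit
            orthologous.append((ident_ab + hit[1]) / 2.0) # average both directions

    ortho_ani = sum(orthologous) / len(orthologous)
    print(f"OrthoANI-style value from {len(orthologous)} orthologous pairs: {ortho_ani:.2f} %")
    ```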

  7. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    NASA Astrophysics Data System (ADS)

    Kanisch, G.

    2017-05-01

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which the uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow resolving interferences between radionuclide activities also in the case of calculating detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
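
    The weighted least-squares step is standard and worth spelling out. The sketch below maps hypothetical net peak areas y (with uncertainties) to two activities through a design matrix A of calibration factors; the paper's additional propagation of the uncertainties of A itself, and the ISO 11929 threshold/limit procedure, are omitted.

    ```python
    # Weighted linear least squares: x = (A^T W A)^-1 A^T W y, W = diag(1/u^2).
    # All numbers are hypothetical.
    import numpy as np

    A = np.array([[0.80, 0.00],    # peak 1: nuclide 1 only
                  [0.30, 0.25],    # peak 2: interference of both nuclides
                  [0.00, 0.60]])   # peak 3: nuclide 2 only
    y = np.array([410.0, 265.0, 305.0])        # net peak areas [counts]
    sigma_y = np.array([25.0, 21.0, 19.0])     # their standard uncertainties

    W = np.diag(1.0 / sigma_y**2)              # weights = 1/u^2
    cov = np.linalg.inv(A.T @ W @ A)           # covariance of the activities
    x = cov @ A.T @ W @ y                      # weighted LSQ estimate

    for i, (act, unc) in enumerate(zip(x, np.sqrt(np.diag(cov))), start=1):
        print(f"nuclide {i}: activity = {act:7.1f} +/- {unc:5.1f} (arb. units)")
    ```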

  8. Main chemical species and molecular structure of deep eutectic solvent studied by experiments with DFT calculation: a case of choline chloride and magnesium chloride hexahydrate.

    PubMed

    Zhang, Chao; Jia, Yongzhong; Jing, Yan; Wang, Huaiyou; Hong, Kai

    2014-08-01

    The infrared spectrum of a deep eutectic solvent of choline chloride and magnesium chloride hexahydrate was measured by FTIR spectroscopy and analyzed with the aid of DFT calculations. The main chemical species and molecular structures in the deep eutectic solvent, namely [MgClm(H2O)6-m]2-m and [ChxCly]x+y complexes, were identified, and the active magnesium complex ion in the electrochemical process was determined. The mechanism of the electrochemical process in the deep eutectic solvent of choline chloride and magnesium chloride hexahydrate was explained by a combination of theoretical calculations and experiments. Besides, based on our results we propose a new system for the dehydration study of magnesium chloride hexahydrate.

  9. Abundances of volatile-bearing phases in carbonaceous chondrites and cooling rates of meteorites based on cation ordering of orthopyroxenes

    NASA Technical Reports Server (NTRS)

    Ganguly, Jibamitra

    1989-01-01

    Results of preliminary calculations of volatile abundances in carbonaceous chondrites are discussed. The method (Ganguly 1982) was refined for the calculation of cooling rate on the basis of cation ordering in orthopyroxenes, and it was applied to the derivation of cooling rates of some stony meteorites. Evaluation of cooling rate is important to the analysis of condensation, accretion, and post-accretionary metamorphic histories of meteorites. The method of orthopyroxene speedometry is widely applicable to meteorites and would be very useful in the understanding of the evolutionary histories of carbonaceous chondrites, especially since the conventional metallographic and fission track methods yield widely different results in many cases. Abstracts are given which summarize the major conclusions of the volatile abundance and cooling rate calculations.

  10. Evaluation of antioxidant activity and electronic structure of aspirin and paracetamol

    NASA Astrophysics Data System (ADS)

    Motozaki, W.; Nagatani, Y.; Kimura, Y.; Endo, K.; Takemura, T.; Kurmaev, E. Z.; Moewes, A.

    2011-01-01

    We present a study of the electronic structure, chemical bonding, and antioxidant activity of phenolic antioxidants (aspirin and paracetamol). X-ray photoelectron and emission spectra of the antioxidants have been simulated by deMon density functional theory (DFT) calculations of the molecules. The chemical bonding of aspirin is characterized by the formation of oxygen 'lone-pair' π-orbitals which can neutralize free radicals and thus be related to the antioxidant properties of the drug. In the case of paracetamol an additional nitrogen 'lone pair' is formed, which can explain the toxicity of the drug. We propose an evaluation method for antioxidant activity based on the relationship between the experimental half-wave oxidation potential (Ep/2) and ionization potentials (IP) calculated by DFT, and conclude that paracetamol has higher antioxidant activity than aspirin.

  11. Surveillance of traumatic firefighter fatalities: an assessment of four systems.

    PubMed

    Estes, Chris R; Marsh, Suzanne M; Castillo, Dawn N

    2011-01-01

    Firefighters regularly respond to hazardous situations that put them at risk for fatal occupational injuries. Traumatic occupational fatality surveillance is a foundation for understanding the problem and developing prevention strategies. We assessed four surveillance systems for their utility in characterizing firefighter fatalities and informing prevention measures. We examined three population-based systems (the Bureau of Labor Statistics' Census of Fatal Occupational Injuries and systems maintained by the United States Fire Administration and the National Fire Protection Association) and one case-based system (data collected through the National Institute for Occupational Safety and Health Fire Fighter Fatality Investigation and Prevention Program). From each system, we selected traumatic fatalities among firefighters for 2003-2006. Then we compared case definitions, methods for case ascertainment, variables collected, and rate calculation methods. Overall magnitude of fatalities differed among systems. The population-based systems were effective in characterizing the circumstances of traumatic firefighter fatalities. The case-based surveillance system was effective in formulating detailed prevention recommendations, which could not be made based on the population-based data alone. Methods for estimating risk were disparate and limited fatality rate comparisons between firefighters and other workers. The systems included in this study contribute toward a greater understanding of firefighter fatalities. Areas of improvement for these systems should continue to be identified as they are used to direct research and prevention efforts.

  12. Development of novel optical fiber sensors for measuring tilts and displacements of geotechnical structures

    NASA Astrophysics Data System (ADS)

    Pei, Hua-Fu; Yin, Jian-Hua; Jin, Wei

    2013-09-01

    Two kinds of innovative sensors based on optical fiber sensing technologies have been proposed and developed for measuring tilts and displacements in geotechnical structures. The newly developed tilt sensors are based on classical beam theory and were successfully used to measure inclinations in a physical model test. Conventional inclinometers, including in-place and portable types, are key instruments commonly used in geotechnical engineering. In this paper, fiber Bragg grating sensing technology is used to measure strains along a standard inclinometer casing, and these strains are used to calculate the lateral and/or horizontal deflections of the casing using beam theory and a finite difference method. Finally, the monitoring results are verified by laboratory tests.

  13. A systematic review of occupational safety and health business cases.

    PubMed

    Verbeek, Jos; Pulliainen, Marjo; Kankaanpää, Eila

    2009-12-01

    Business cases are commonly developed as a means to rationalize investment. We systematically reviewed 26 reported cases of occupational safety and health (OSH) interventions to assess whether health and productivity arguments make a good business case. To be included in the review, studies had to analyze the costs and benefits, including productivity, of an OSH intervention at the enterprise level. We searched Medline and Embase for studies and used Google search in addition. Two reviewers independently selected studies and extracted data. The intervention profitability was calculated in euros (2008 values) as the first year's benefits minus the total intervention costs per worker. The payback period was calculated as the intervention costs divided by the first year's benefits. We found three ex-ante and 23 ex-post cases. In 20 cases, the study design was a before-after comparison without a control group. Generally a 100% reduction of injuries or sickness absence was assumed. In two cases, productivity and quality increases were very large. The main benefit was avoided sick leave. Depreciation or discounting was applied only in a minority of cases. The intervention profitability was negative in seven studies, up to euro 500 per employee in 12 studies and more than euro 500 per employee in seven studies. The payback period was less than half a year for 19 studies. Only a few studies included sensitivity analyses. Few ex-ante business cases for management decisions on OSH are reported. Guidelines for reporting and evaluation are needed. Business cases need more sound assumptions on the effectiveness of interventions and should incorporate greater uncertainty into their design. Ex-post evaluation should preferably be based on study designs that control for trends at a time different from that of the intervention.
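
    The review's two summary statistics, as defined above, in executable form (inputs hypothetical):

    ```python
    # Profitability and payback period as defined in the review.

    def intervention_profitability(first_year_benefits: float,
                                   total_costs: float,
                                   n_workers: int) -> float:
        """First year's benefits minus total intervention costs, per worker (EUR)."""
        return (first_year_benefits - total_costs) / n_workers

    def payback_period_years(total_costs: float, first_year_benefits: float) -> float:
        """Intervention costs divided by the first year's benefits."""
        return total_costs / first_year_benefits

    benefits, costs, workers = 120_000.0, 45_000.0, 150   # hypothetical case
    print(f"profitability = EUR {intervention_profitability(benefits, costs, workers):.2f}/worker")
    print(f"payback       = {payback_period_years(costs, benefits):.2f} years")
    ```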

  14. 42 CFR 484.220 - Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Excerpt (42 CFR 484.220, Public Health): Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and area wage levels. The section addresses changes to the case-mix that are a result of changes in the coding or classification of different...

  15. Low-frequency quadrupole impedance of undulators and wigglers

    DOE PAGES

    Blednykh, A.; Bassi, G.; Hidaka, Y.; ...

    2016-10-25

    An analytical expression for the low-frequency quadrupole impedance of undulators and wigglers is derived and benchmarked against beam-based impedance measurements done at the 3 GeV NSLS-II storage ring. The adopted theoretical model, valid for an arbitrary number of electromagnetic layers with parallel geometry, allows one to calculate the quadrupole impedance for arbitrary values of the magnetic permeability μr. Here, in the comparison of the analytical results with the measurements for variable magnet gaps, two limiting cases of the permeability have been studied: the case of perfect magnets (μr → ∞), and the case in which the magnets are fully saturated (μr = 1).

  16. Self-homodyne free-space optical communication system based on orthogonally polarized binary phase shift keying.

    PubMed

    Cai, Guangyu; Sun, Jianfeng; Li, Guangyuan; Zhang, Guo; Xu, Mengmeng; Zhang, Bo; Yue, Chaolei; Liu, Liren

    2016-06-10

    A self-homodyne laser communication system based on orthogonally polarized binary phase shift keying is demonstrated. The working principles of this method and the structure of a transceiver are described using theoretical calculations. Moreover, the signal-to-noise ratio, sensitivity, and bit error rate are analyzed for the amplifier-noise-limited case. The reported experiment validates the feasibility of the proposed method and demonstrates its advantageous sensitivity as a self-homodyne communication system.

  17. [Comparison between two different Disability Weights calculations: the case of occupational injuries].

    PubMed

    Levi, Miriam; Ariani, Filippo; Baldasseroni, Alberto

    2011-01-01

    To introduce the concept of DALYs (Disability Adjusted Life Years) in order to calculate the burden of occupational injuries, and to compare the disability weights methodology applied by the National Institute for Insurance against Accidents at Work (INAIL) to occupational injuries with the methodology adopted by the World Health Organization in the Global Burden of Disease Study (GBD), in order to facilitate, on a regional-national basis, the future application of Burden of Disease estimates for this phenomenon based on data available from the NHS. In the first part of the present study, a comparison between the theoretical GBD methodology, based on Disability Weights (DW), and the INAIL methodology based on Gradi di Inabilità (Degrees of Disability, GI) described in the table of impairments is made, using data on occupational injuries that occurred in Tuscany from 2001 to 2008. Given the different criteria adopted by WHO and INAIL for the classification of injury sequelae, in the second part two equations described in the literature have been applied in order to correct systematic biases. In the INAIL dataset, all types of injuries, though often small in scale, have cases with permanent consequences, some of them serious. This contrasts with the assumptions of the WHO, which, apart from the cases of amputation, reduces the possibility of lifelong disabilities to a few very serious categories. In the case of femur and skull fractures, the proportion of lifelong cases is considered by WHO to be similar to the proportion obtained in the INAIL dataset after narrowing the threshold of permanent damage to cases with GI ≥ 33. In the case of amputations and spinal cord injuries, for which the WHO assumes a priori that all cases have lifelong consequences, the greater similarity between the assumptions and the empirically observable reality is on the contrary obtained after extending the threshold of permanent damage to all cases with even minimal sequelae. The comparison between the WHO DW and the INAIL GI, possible only for injuries resulting in permanent damage, shows that for injuries of greater severity the INAIL GI are generally lower than the WHO DW, while for less serious injuries INAIL gives higher values. The length of temporary disabilities recorded by INAIL is systematically higher than that estimated by WHO. These initial comparisons between the WHO methodology and the case evaluation performed by INAIL show that the Italian system, based on the gathering of all relevant aspects of each case, has the potential to utilize and synthesize a greater amount of information. However, wide limits of uncertainty remain, and further empirical findings are needed in order to compare the two systems in terms of the precise determination of the DW, the length of disabilities and variations in injury-related mortality.
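
    The bookkeeping underlying such comparisons is the standard GBD one: YLD = cases × disability weight × duration, and DALY = YLL + YLD. A minimal sketch with hypothetical weights for a single injury category:

    ```python
    # Sketch of the DALY arithmetic: YLD = incidence * disability weight *
    # duration; DALY = YLL + YLD. The weights below are hypothetical stand-ins
    # for a WHO DW and an INAIL GI-derived weight for one injury type.

    def yld(cases: int, disability_weight: float, duration_years: float) -> float:
        return cases * disability_weight * duration_years

    cases, duration = 1_000, 2.5          # hypothetical injury cohort
    dw_who, dw_inail = 0.272, 0.18        # hypothetical weights for one injury type

    print(f"YLD (WHO DW)   = {yld(cases, dw_who, duration):8.1f}")
    print(f"YLD (INAIL GI) = {yld(cases, dw_inail, duration):8.1f}")
    # With no deaths in the cohort, DALY = YLD; otherwise add
    # YLL = deaths * life expectancy at age of death.
    ```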

  18. Ionization energies of aqueous nucleic acids: photoelectron spectroscopy of pyrimidine nucleosides and ab initio calculations.

    PubMed

    Slavícek, Petr; Winter, Bernd; Faubel, Manfred; Bradforth, Stephen E; Jungwirth, Pavel

    2009-05-13

    Vertical ionization energies of the nucleosides cytidine and deoxythymidine in water, the lowest ones amounting in both cases to 8.3 eV, are obtained from photoelectron spectroscopy measurements in aqueous microjets. Ab initio calculations employing a nonequilibrium polarizable continuum model quantitatively reproduce the experimental spectra and provide molecular interpretation of the individual peaks of the photoelectron spectrum, showing also that lowest ionization originates from the base. Comparison of calculated vertical ionization potentials of pyrimidine bases, nucleosides, and nucleotides in water and in the gas phase underlines the dramatic effect of bulk hydration on the electronic structure. In the gas phase, the presence of sugar and, in particular, of phosphate has a strong effect on the energetics of ionization of the base. Upon bulk hydration, the ionization potential of the base in contrast becomes rather insensitive to the presence of the sugar and phosphate, which indicates a remarkable screening ability of the aqueous solvent. Accurate aqueous-phase vertical ionization potentials provide a significant improvement to the corrected gas-phase values used in the literature and represent important information in assessing the threshold energies for photooxidation and oxidation free energies of solvent-exposed DNA components. Likewise, such energetic data should allow improved assessment of delocalization and charge-hopping mechanisms in DNA ionized by radiation.

  19. Case based measles surveillance in Pune: evidence to guide current and future measles control and elimination efforts in India.

    PubMed

    Bose, Anindya Sekhar; Jafari, Hamid; Sosler, Stephen; Narula, Arvinder Pal Singh; Kulkarni, V M; Ramamurty, Nalini; Oommen, John; Jadi, Ramesh S; Banpel, R V; Henao-Restrepo, Ana Maria

    2014-01-01

    According to WHO estimates, 35% of global measles deaths in 2011 occurred in India. In 2013, India committed to a goal of measles elimination by 2020. Laboratory supported case based measles surveillance is an essential component of measles elimination strategies. Results from a case-based measles surveillance system in Pune district (November 2009 through December 2011) are reported here with wider implications for measles elimination efforts in India. Standard protocols were followed for case identification, investigation and classification. Suspected measles cases were confirmed through serology (IgM) or epidemiological linkage or clinical presentation. Data regarding age, sex, vaccination status were collected and annualized incidence rates for measles and rubella cases calculated. Of the 1011 suspected measles cases reported to the surveillance system, 76% were confirmed measles, 6% were confirmed rubella, and 17% were non-measles, non-rubella cases. Of the confirmed measles cases, 95% were less than 15 years of age. Annual measles incidence rate was more than 250 per million persons and nearly half were associated with outbreaks. Thirty-nine per cent of the confirmed measles cases were vaccinated with one dose of measles vaccine (MCV1). Surveillance demonstrated high measles incidence and frequent outbreaks in Pune where MCV1 coverage in infants was above 90%. Results indicate that even high coverage with a single dose of measles vaccine was insufficient to provide population protection and prevent measles outbreaks. An effective measles and rubella surveillance system provides essential information to plan, implement and evaluate measles immunization strategies and monitor progress towards measles elimination.

  20. Case Based Measles Surveillance in Pune: Evidence to Guide Current and Future Measles Control and Elimination Efforts in India

    PubMed Central

    Bose, Anindya Sekhar; Jafari, Hamid; Sosler, Stephen; Narula, Arvinder Pal Singh; Kulkarni, V. M.; Ramamurty, Nalini; Oommen, John; Jadi, Ramesh S.; Banpel, R. V.; Henao-Restrepo, Ana Maria

    2014-01-01

    Background According to WHO estimates, 35% of global measles deaths in 2011 occurred in India. In 2013, India committed to a goal of measles elimination by 2020. Laboratory supported case based measles surveillance is an essential component of measles elimination strategies. Results from a case-based measles surveillance system in Pune district (November 2009 through December 2011) are reported here with wider implications for measles elimination efforts in India. Methods Standard protocols were followed for case identification, investigation and classification. Suspected measles cases were confirmed through serology (IgM) or epidemiological linkage or clinical presentation. Data regarding age, sex, vaccination status were collected and annualized incidence rates for measles and rubella cases calculated. Results Of the 1011 suspected measles cases reported to the surveillance system, 76% were confirmed measles, 6% were confirmed rubella, and 17% were non-measles, non-rubella cases. Of the confirmed measles cases, 95% were less than 15 years of age. Annual measles incidence rate was more than 250 per million persons and nearly half were associated with outbreaks. Thirty-nine per cent of the confirmed measles cases were vaccinated with one dose of measles vaccine (MCV1). Conclusion Surveillance demonstrated high measles incidence and frequent outbreaks in Pune where MCV1 coverage in infants was above 90%. Results indicate that even high coverage with a single dose of measles vaccine was insufficient to provide population protection and prevent measles outbreaks. An effective measles and rubella surveillance system provides essential information to plan, implement and evaluate measles immunization strategies and monitor progress towards measles elimination. PMID:25290339

  1. Aerodynamic Analysis Over Double Wedge Airfoil

    NASA Astrophysics Data System (ADS)

    Prasad, U. S.; Ajay, V. S.; Rajat, R. H.; Samanyu, S.

    2017-05-01

    Aeronautical research is increasingly focused on supersonic flight and on methods to attain better and safer flight with the highest possible performance. Aerodynamic analysis is part of this effort, focusing on airfoil shapes which will permit sustained flight of aircraft at these speeds. Airfoil shapes differ based on the application; hence the airfoil shapes considered for supersonic speeds are different from the ones considered for subsonic speeds. The present work studies the effect of changing a physical parameter of the double wedge airfoil: the wedge angle, ranging from 5 to 15 degrees. The Mach number range covers the transonic and supersonic regimes. Available computational tools are utilized for the analysis. The double wedge airfoil is analysed at different angles of attack (AOA) based on the wedge angle. The analysis is carried out using Fluent at standard conditions with the specific heat ratio taken as 1.4. Oblique shock properties are calculated manually with the help of Microsoft Excel, and a MATLAB code is used to obtain the shock angle from the Mach number and wedge angle at the given parameters. Results obtained from the manual calculations and the Fluent analysis are cross-checked.
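
    The manual step described above rests on the classical theta-beta-M relation. A sketch of the same computation in Python (the study itself used MATLAB and Excel), solving for the weak-shock angle at Mach 2 for the study's wedge angles:

    ```python
    # Weak-shock angle from the theta-beta-M relation:
    #   tan(theta) = 2*cot(beta)*(M^2*sin^2(beta)-1) / (M^2*(gamma+cos(2*beta))+2)
    import numpy as np
    from scipy.optimize import brentq

    GAMMA = 1.4

    def theta_from_beta(beta: float, mach: float) -> float:
        """Flow deflection angle theta [rad] for shock angle beta [rad]."""
        num = 2.0 / np.tan(beta) * (mach**2 * np.sin(beta)**2 - 1.0)
        den = mach**2 * (GAMMA + np.cos(2.0 * beta)) + 2.0
        return np.arctan(num / den)

    def weak_shock_angle(theta_deg: float, mach: float) -> float:
        """Weak-solution shock angle [deg] for a given wedge half-angle."""
        theta = np.radians(theta_deg)
        mu = np.arcsin(1.0 / mach)                   # Mach angle, where theta = 0
        betas = np.linspace(mu + 1e-6, np.pi / 2 - 1e-6, 2000)
        b_max = betas[np.argmax([theta_from_beta(b, mach) for b in betas])]
        return np.degrees(brentq(lambda b: theta_from_beta(b, mach) - theta,
                                 mu + 1e-6, b_max))

    for wedge in (5.0, 10.0, 15.0):                  # wedge angles from the study
        print(f"M=2.0, theta={wedge:4.1f} deg -> beta = {weak_shock_angle(wedge, 2.0):5.2f} deg")
    ```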

  2. Design method of redundancy of brace-anchor sharing supporting based on cooperative deformation

    NASA Astrophysics Data System (ADS)

    Liu, Jun-yan; Li, Bing; Liu, Yan; Cai, Shan-bing

    2017-11-01

    Because of complicated environmental constraints, foundation pit support takes many forms, and brace-anchor shared support is widely used. However, research on the force-deformation characteristics and the cooperative response of brace-anchor shared support is insufficient. The application of redundancy theory in structural engineering is relatively mature, but there is little theoretical research on redundancy in underground engineering. Based on the idea of cooperative deformation, this paper calculates the cooperative-deformation redundancy ratio using the local reinforcement design method and Frangopol's formula for structural component redundancy. Through calculation of the cooperative-deformation redundancy ratio at the joint of the brace-anchor shared support in an engineering case, the paper explores the optimal anchor distribution under the condition of cooperative deformation, and, through analysis of the displacement and stress fields, the cooperative-deformation results are validated by comparison with field monitoring data. This provides a theoretical basis for the design of this kind of foundation pit in the future.

  3. The connection characteristics of flux pinned docking interface

    NASA Astrophysics Data System (ADS)

    Zhang, Mingliang; Han, Yanjun; Guo, Xing; Zhao, Cunbao; Deng, Feiyue

    2017-03-01

    This paper presents the mechanism and potential advantages of a flux-pinned docking interface mainly composed of a high temperature superconductor and an electromagnet. In order to readily assess the connection characteristics of the flux-pinned docking interface, the force between a high temperature superconductor and an electromagnet needs to be investigated. The force between two current coils calculated by the magnetic dipole method and by the Ampere law method is compared, which shows that the Ampere law method has the higher accuracy. Based on the improved frozen image model and the Ampere law method, the force between a high temperature superconductor bulk and a permanent magnet can be calculated, which is validated experimentally. Moreover, the force between the high temperature superconductor and the electromagnet applied to the flux-pinned docking interface can be predicted and analyzed. The connection stiffness between the high temperature superconductor and the permanent magnet can be calculated based on the improved frozen image model and Hooke's law. The relationship between the connection stiffness and the field cooling height is analyzed. Furthermore, the connection stiffness of the flux-pinned docking interface is predicted and optimized, and its effective working range is defined and analyzed for several different parameter sets.

  4. Adjusting case mix payment amounts for inaccurately reported comorbidity data.

    PubMed

    Sutherland, Jason M; Hamm, Jeremy; Hatcher, Jeff

    2010-03-01

    Case mix methods such as diagnosis related groups have become a basis of payment for inpatient hospitalizations in many countries. Specifying cost weight values for case mix system payment has important consequences; recent evidence suggests case mix cost weight inaccuracies influence the supply of some hospital-based services. To begin to address the question of case mix cost weight accuracy, this paper is motivated by the objective of improving the accuracy of cost weight values in the presence of inaccurate or incomplete comorbidity data. The methods are applicable to case mix systems that incorporate disease severity or comorbidity adjustments. The methods are based on the availability of detailed clinical and cost information linked at the patient level and leverage recent results from clinical data audits. A Bayesian framework is used to synthesize clinical data audit information regarding misclassification probabilities into cost weight value calculations. The models are implemented through Markov chain Monte Carlo methods. An example used to demonstrate the methods finds that inaccurate comorbidity data biases cost weight values (and payments) downward. The implications for hospital payments are discussed and the generalizability of the approach is explored.
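
    The direction of the bias is easy to reproduce with a toy Monte Carlo (an illustration of the mechanism, not the paper's Bayesian MCMC): when some truly complex cases are recorded as simple, the "simple" group absorbs their high costs and the gap between the two groups' cost weights shrinks.

    ```python
    # Toy simulation of under-coded comorbidity compressing severity weights.
    # All parameters (prevalence, costs, miss rate) are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    complex_case = rng.random(n) < 0.30                 # true severity flag
    cost = np.where(complex_case,
                    rng.normal(9_000, 1_500, n),        # complex cases cost more
                    rng.normal(5_000, 1_000, n))

    p_miss = 0.25                                       # chance comorbidity is not coded
    recorded = complex_case & (rng.random(n) > p_miss)  # miscoded cases look simple

    def weight(mask):                                   # group mean / overall mean
        return cost[mask].mean() / cost.mean()

    print(f"true weights:     simple {weight(~complex_case):.3f}, complex {weight(complex_case):.3f}")
    print(f"recorded weights: simple {weight(~recorded):.3f}, complex {weight(recorded):.3f}")
    # The recorded 'simple' group absorbs expensive miscoded cases, so the
    # complex group's payment premium shrinks relative to the truth.
    ```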

  5. Sensitivity and Specificity of Histoplasma Antigen Detection by Enzyme Immunoassay.

    PubMed

    Cunningham, Lauren; Cook, Audrey; Hanzlicek, Andrew; Harkin, Kenneth; Wheat, Joseph; Goad, Carla; Kirsch, Emily

    2015-01-01

    The objective of this study was to evaluate the sensitivity and specificity of an antigen enzyme immunoassay (EIA) on urine samples for the diagnosis of histoplasmosis in dogs. This retrospective medical records review included canine cases with urine samples submitted for Histoplasma EIA antigen assay between 2007 and 2011 from three veterinary institutions. Cases for which urine samples were submitted for Histoplasma antigen testing were reviewed and compared to the gold standard of finding Histoplasma organisms or an alternative diagnosis on cytology or histopathology. Sensitivity, specificity, negative predictive value, positive predictive value, and the kappa coefficient and associated confidence interval were calculated for the EIA-based Histoplasma antigen assay. Sixty cases met the inclusion criteria. Seventeen cases were considered true positives based on identification of the organism, and 41 cases were considered true negatives with an alternative definitive diagnosis. Two cases were considered false negatives, and there were no false positives. Sensitivity was 89.47% and the negative predictive value was 95.35%. Specificity and the positive predictive value were both 100%. The kappa coefficient was 0.9207 (95% confidence interval, 0.8131-1). The Histoplasma antigen EIA test demonstrated high specificity and sensitivity for the diagnosis of histoplasmosis in dogs.
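
    The reported accuracy statistics follow directly from the study's own confusion counts (TP = 17, TN = 41, FN = 2, FP = 0):

    ```python
    # Diagnostic-accuracy arithmetic reproduced from the counts in the abstract.
    TP, TN, FN, FP = 17, 41, 2, 0

    sensitivity = TP / (TP + FN)        # 17/19 = 89.47 %
    specificity = TN / (TN + FP)        # 41/41 = 100 %
    ppv = TP / (TP + FP)                # 17/17 = 100 %
    npv = TN / (TN + FN)                # 41/43 = 95.35 %

    for name, val in [("sensitivity", sensitivity), ("specificity", specificity),
                      ("PPV", ppv), ("NPV", npv)]:
        print(f"{name:11s} = {val:7.2%}")
    ```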

  6. Development of Extended Ray-tracing method including diffraction, polarization and wave decay effects

    NASA Astrophysics Data System (ADS)

    Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru

    2017-10-01

    Geometrical optics ray-tracing is a reasonable numerical approach for describing the electron cyclotron resonance wave (ECW) in slowly varying, spatially inhomogeneous plasma. The results of this conventional method are adequate in most cases. However, in the case of helical fusion plasma, which has a complicated magnetic structure, strong magnetic shear combined with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and a complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations describing the ECW do not have any terms for diffraction, polarization and wave decay effects, they cannot accurately describe mode coupling of waves, strongly focused waves, the behavior of waves in an inhomogeneous absorption region, and so on. As a fundamental solution to these problems, we consider an extension of the ray-tracing method. The specific procedure is planned as follows. First, calculate the reference ray by the conventional method, and define a local ray-based coordinate system along the reference ray. Then, calculate the evolution of the distributions of amplitude and phase on the ray-based coordinates step by step. The progress of our extended method will be presented.

  7. Task 7: ADPAC User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.

  8. Crankshaft motion in a highly congested bis(triarylmethyl)peroxide.

    PubMed

    Khuong, Tinh-Alfredo V; Zepeda, Gerardo; Sanrame, Carlos N; Dang, Hung; Bartberger, Michael D; Houk, K N; Garcia-Garibay, Miguel A

    2004-11-17

    Crankshaft motion has been proposed in the solid state for molecular fragments consisting of three or more rotors linked by single bonds, whereby the two terminal rotors are static and the internal rotors experience circular motion. Bis-[tri-(3,5-di-tert-butyl)phenylmethyl]-peroxide 2 was tested as a model in search of crankshaft motion at the molecular level. In the case of peroxide 2, the bulky trityl groups may be viewed as the external static rotors, while the two peroxide oxygens can undergo the sought after internal rotation. Evidence for this process in the case of peroxide 2 was obtained from conformational dynamics determined by variable-temperature (13)C and (1)H NMR between 190 and 375 K in toluene-d(8). Detailed spectral assignments for the interpretation of two coalescence processes were based on a correlation between NMR spectra obtained in solution at low temperature, in the solid state by (13)C CPMAS NMR, and by GIAO calculations based on a B3LYP/6-31G structure of 2 obtained from its X-ray coordinates as the input. Evidence supporting crankshaft rotation rather than slippage of the trityl groups was obtained from molecular mechanics calculations.

  9. The MONET code for the evaluation of the dose in hadrontherapy

    NASA Astrophysics Data System (ADS)

    Embriaco, A.

    2018-01-01

    MONET is a code for the computation of the 3D dose distribution of protons in water. For the lateral profile, MONET is based on the Molière theory of multiple Coulomb scattering. To take into account also the nuclear interactions, we add to this theory a Cauchy-Lorentz function, whose two parameters are obtained by a fit to a FLUKA simulation. We have implemented the Papoulis algorithm for the passage from the projected to a 2D lateral distribution. For the longitudinal profile, we have implemented a new calculation of the energy loss that is in good agreement with simulations. The straggling is included by convolution of the energy loss with a Gaussian function. To complete the longitudinal profile, the nuclear contributions are also included using a linear parametrization. The total dose profile is calculated on a 3D mesh by evaluating at each depth the 2D lateral distributions and by scaling them to the value of the energy deposition. We have compared MONET with FLUKA in two cases: a single Gaussian beam and a lateral scan. In both cases, we have obtained good agreement for different energies of protons in water.
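
    The straggling step reduces to a convolution with a Gaussian. A sketch with a crude triangular stand-in for a Bragg-type energy-loss curve (not MONET's actual longitudinal model):

    ```python
    # Gaussian smearing of a depth profile, dose-preserving by construction.
    import numpy as np

    z = np.linspace(0.0, 12.0, 1201)                      # depth grid [cm]
    profile = np.clip(z - 8.0, 0.0, None) * (z < 10.0)    # rises, then cuts off

    sigma = 0.15                                          # straggling width [cm]
    dz = z[1] - z[0]
    kz = np.arange(-5 * sigma, 5 * sigma + dz, dz)
    kernel = np.exp(-0.5 * (kz / sigma) ** 2)
    kernel /= kernel.sum()                                # normalise: preserve dose

    smeared = np.convolve(profile, kernel, mode="same")
    print(f"peak before/after smearing: {profile.max():.2f} / {smeared.max():.2f}")
    print(f"integral conserved: {profile.sum() * dz:.3f} vs {smeared.sum() * dz:.3f}")
    ```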

  10. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7]. (authors)

  11. Modeling and simulation of magnetic resonance imaging based on intermolecular multiple quantum coherences

    NASA Astrophysics Data System (ADS)

    Cai, Congbo; Dong, Jiyang; Cai, Shuhui; Cheng, En; Chen, Zhong

    2006-11-01

    Intermolecular multiple quantum coherences (iMQCs) have many potential applications since they can provide interaction information between different molecules within the range of the dipolar correlation distance, and can provide new contrast in magnetic resonance imaging (MRI). Because of the non-localized nature of the dipolar field, and the non-linear nature of the Bloch equations incorporating the dipolar field term, the evolution behavior of iMQCs is difficult to deduce strictly in many cases. In such cases, simulation studies are very important. Simulation results can not only guide the optimization of experimental conditions, but also help analyze unexpected experimental results. Based on our product operator matrix and the K-space method for dipolar field calculation, MRI simulation software was constructed, running on the Windows operating system. The non-linear Bloch equations are integrated by a fifth-order Cash-Karp Runge-Kutta formalism. Computational time can be efficiently reduced by separating the effects of chemical shifts and the strong gradient field. Using this software, simulations of different kinds of complex MRI sequences can be performed conveniently and quickly on general personal computers. Some examples are given and their results discussed.
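
    At its core such a simulator integrates the Bloch equations with an adaptive Runge-Kutta scheme. A minimal sketch (SciPy's Dormand-Prince RK45 rather than the paper's Cash-Karp pair, and without the dipolar-field term that the full software adds):

    ```python
    # Minimal Bloch-equation integration: precession plus T1/T2 relaxation.
    import numpy as np
    from scipy.integrate import solve_ivp

    GAMMA = 2 * np.pi * 42.577e6      # proton gyromagnetic ratio [rad/s/T]
    T1, T2 = 1.0, 0.1                 # relaxation times [s]
    B = np.array([0.0, 0.0, 1e-6])    # small off-resonance field, rotating frame [T]
    M0 = 1.0                          # equilibrium magnetization

    def bloch(t, M):
        dM = GAMMA * np.cross(M, B)                                  # precession
        dM -= np.array([M[0] / T2, M[1] / T2, (M[2] - M0) / T1])     # relaxation
        return dM

    sol = solve_ivp(bloch, (0.0, 0.5), y0=[M0, 0.0, 0.0], method="RK45",
                    rtol=1e-8, atol=1e-10)
    Mx, My, Mz = sol.y[:, -1]
    print(f"M(t=0.5 s) = ({Mx:+.3f}, {My:+.3f}, {Mz:+.3f})")
    ```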

  12. A medical image-based graphical platform -- features, applications and relevance for brachytherapy.

    PubMed

    Fonseca, Gabriel P; Reniers, Brigitte; Landry, Guillaume; White, Shane; Bellezzo, Murillo; Antunes, Paula C G; de Sales, Camila P; Welteman, Eduardo; Yoriyaz, Hélio; Verhaegen, Frank

    2014-01-01

    Brachytherapy dose calculation is commonly performed using the Task Group-No 43 Report-Updated protocol (TG-43U1) formalism. Recently, a more accurate approach has been proposed that can handle tissue composition, tissue density, body shape, applicator geometry, and dose reporting either in media or water. Some model-based dose calculation algorithms are based on Monte Carlo (MC) simulations. This work presents a software platform capable of processing medical images and treatment plans, and preparing the required input data for MC simulations. The A Medical Image-based Graphical platfOrm-Brachytherapy module (AMIGOBrachy) is a user interface, coupled to the MCNP6 MC code, for absorbed dose calculations. The AMIGOBrachy was first validated in water for a high-dose-rate (192)Ir source. Next, dose distributions were validated in uniform phantoms consisting of different materials. Finally, dose distributions were obtained in patient geometries. Results were compared against a treatment planning system including a linear Boltzmann transport equation (LBTE) solver capable of handling nonwater heterogeneities. The TG-43U1 source parameters are in good agreement with literature with more than 90% of anisotropy values within 1%. No significant dependence on the tissue composition was observed comparing MC results against an LBTE solver. Clinical cases showed differences up to 25%, when comparing MC results against TG-43U1. About 92% of the voxels exhibited dose differences lower than 2% when comparing MC results against an LBTE solver. The AMIGOBrachy can improve the accuracy of the TG-43U1 dose calculation by using a more accurate MC dose calculation algorithm. The AMIGOBrachy can be incorporated in clinical practice via a user-friendly graphical interface. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  13. Energy-optimal electrical excitation of nerve fibers.

    PubMed

    Jezernik, Saso; Morari, Manfred

    2005-04-01

    We derive, based on an analytical nerve-membrane model and the optimal control theory of dynamical systems, an energy-optimal stimulation current waveform for the electrical excitation of nerve fibers. Optimal stimulation waveforms are calculated for both non-leaky and leaky membranes, the leaky membrane being the realistic case. Finally, we compare the waveforms and energies necessary for excitation of a leaky membrane in the case where the stimulation waveform is a square-wave current pulse and in the case of energy-optimal stimulation. The optimal stimulation waveform is an exponentially rising waveform and requires considerably less energy to excite the nerve than a square-wave pulse (especially for longer pulse durations). The described theoretical results can lead to drastically increased battery lifetime and/or decreased energy-transmission requirements for implanted biomedical systems.

  14. Calculation of optical parameters for covalent binary alloys used in optical memories/solar cells: a modified approach

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Promod K.; Gupta, Poonam; Singh, Laxman

    2001-06-01

    Chalcogenide-based alloys find applications in a number of devices such as optical memories, IR detectors, optical switches, photovoltaics, and compound semiconductor heterostructures. We have modified Gurman's statistical thermodynamic model (STM) of binary covalent alloys. In Gurman's model, entropy calculations are based on the number of structural units present; the need to modify this model arose from the fact that it assigns equal probability to all the tetrahedra present in the alloy. We have modified Gurman's model by introducing the concept that the entropy is based on the bond arrangement rather than on the structural units present. In the present work, calculations based on this modification are presented for optical properties relevant to optical switching/memories, solar cells and other optical devices. It is shown that the optical parameters calculated with the modified model (for the typical case of GaxSe1-x) are closer to the available experimental results. These parameters include the refractive index, extinction coefficient, dielectric functions, and optical band gap. GaxSe1-x has also been found suitable for reversible optical memories, where the phase change (a → c and vice versa) takes place under specified physical conditions. DTA/DSC studies also suggest the suitability of this material for optical switching/memory applications. We further suggest the possible use of GaxSe1-x (x = 0.4) in place of the oxide layer in metal-oxide-semiconductor-type solar cells; the new structure is Metal-Ga2Se3-GaAs. The I-V characteristics and other parameters calculated for this structure are found to be much better than those of Si-based solar cells. Maximum output power is obtained at an intermediate layer thickness of approximately 40 Å for this typical solar cell.

  15. Electron and donor-impurity-related Raman scattering and Raman gain in triangular quantum dots under an applied electric field

    NASA Astrophysics Data System (ADS)

    Tiutiunnyk, Anton; Akimov, Volodymyr; Tulupenko, Viktor; Mora-Ramos, Miguel E.; Kasapoglu, Esin; Morales, Alvaro L.; Duque, Carlos Alberto

    2016-04-01

    The differential cross-section of electron Raman scattering and the Raman gain are calculated and analysed for prismatic quantum dots with an equilateral-triangle base. The study takes into account their dependence on the size of the triangle, the influence of an externally applied electric field, and the presence of an ionized donor center located at the triangle's orthocenter. The calculations are made within the effective-mass and parabolic-band approximations, with a diagonalization scheme applied to obtain the eigenfunctions and eigenvalues of the x-y Hamiltonian. The incident and secondary (scattered) radiation are considered linearly polarized along the y-direction, coinciding with the direction of the applied electric field. For the case with an impurity center, Raman scattering with the intermediate-state energy below that of the initial state is found to show a maximum differential cross-section more than an order of magnitude larger than that resulting from the opposite ordering of intermediate-state energies. The Raman gain has its maximum magnitude at a dot size of around 35 nm and an electric field of 40 kV/cm for the case without impurity, and at the maximum considered values of the input parameters for the case with impurity. Values of the Raman gain of up to the order of 10^4 cm^-1 are predicted in both cases.

  16. Rituximab as first choice for patients with refractory rheumatoid arthritis: cost-effectiveness analysis in Iran based on a systematic review and meta-analysis.

    PubMed

    Ahmadiani, Saeed; Nikfar, Shekoufeh; Karimi, Somayeh; Jamshidi, Ahmad Reza; Akbari-Sari, Ali; Kebriaeezadeh, Abbas

    2016-09-01

    The aim of this study was to evaluate the effectiveness and cost-effectiveness of using rituximab as first line for patients with refractory rheumatoid arthritis, in comparison with continuing conventional DMARDs, from the perspective of health-service governors. A systematic review was implemented by searching PubMed, Scopus and the Cochrane Library; quality assessment was performed with the Jadad scale. After meta-analysis of ACR index results, the QALY gain was calculated by mapping the ACR index to HAQ and utility indices. To measure direct and indirect medical costs, a set of interviews with patients was conducted: thirty-two patients were selected from three referral rheumatology clinics in Tehran, with a definite diagnosis of refractory rheumatoid arthritis in the preceding year and a treatment regimen of either rituximab or DMARDs within the last year. The incremental cost-effectiveness ratio was calculated for the base case and for a generic-rituximab scenario, with three times the GDP per capita taken as the cost-effectiveness threshold. Four studies were eligible for this systematic review. Total risk differences of 0.3 for achieving the ACR20 criteria, 0.21 for ACR50 and 0.1 for ACR70 were calculated. The mean total medical cost per patient over 24 weeks was $3985 in the rituximab group and $932 in the DMARDs group. Thus, the incremental cost per QALY is $45,900-$70,223 in the base case and $32,386-$49,550 in the generic scenario. Rituximab for the treatment of patients with refractory rheumatoid arthritis is therefore not cost-effective in Iran under either scenario.
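
    For orientation, the incremental cost-effectiveness ratio quoted above is a simple quotient of cost and effect differences. A minimal sketch using the 24-week cost figures from the abstract and a hypothetical QALY gain (the abstract does not report the QALY values themselves):

```python
# Minimal ICER sketch: incremental cost per QALY gained.
# Costs are the 24-week means from the abstract; the QALY gain is hypothetical.
cost_rituximab = 3985.0   # USD per patient, rituximab group
cost_dmards = 932.0       # USD per patient, DMARDs group
qaly_gain = 0.06          # hypothetical incremental QALYs of rituximab vs DMARDs

icer = (cost_rituximab - cost_dmards) / qaly_gain
print(f"ICER: ${icer:,.0f} per QALY")  # compare against the 3x GDP-per-capita threshold
```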

  17. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy

    NASA Astrophysics Data System (ADS)

    Giménez-Alventosa, Vicent; Antunes, Paula C. G.; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-01

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissue are practically identical, so that the absorbed doses in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score the photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), in all cases including a realistic modelling of low-energy brachytherapy sources in order to benchmark the proposed formalism. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).
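
    The correction described above amounts to weighting the mass energy-absorption coefficients by medium-specific photon energy fluences. The following is a schematic numerical sketch of that dose ratio under charged-particle equilibrium; the spectra and coefficients are toy placeholders, not the paper's Monte Carlo data.

```python
# Schematic sketch of the energy-fluence-based conversion: under CPE,
# D_med ~ integral of Psi_med(E) * (mu_en/rho)_med(E) dE, so
# D_w / D_t = int(Psi_w * (mu_en/rho)_w) / int(Psi_t * (mu_en/rho)_t).
# All spectra and coefficients below are hypothetical placeholders.
import numpy as np

E = np.linspace(0.02, 0.04, 50)                    # photon energies, MeV
psi_water = np.exp(-((E - 0.028) / 0.004) ** 2)    # energy fluence in water (a.u.)
psi_tissue = np.exp(-((E - 0.029) / 0.004) ** 2)   # energy fluence in tissue (a.u.)
mu_en_water = 0.20 * (0.03 / E) ** 3               # (mu_en/rho) water, cm^2/g (toy)
mu_en_tissue = 0.22 * (0.03 / E) ** 3              # (mu_en/rho) tissue, cm^2/g (toy)

dose_water = np.trapz(psi_water * mu_en_water, E)
dose_tissue = np.trapz(psi_tissue * mu_en_tissue, E)
print("D_w / D_t =", dose_water / dose_tissue)
```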

  18. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy.

    PubMed

    Giménez-Alventosa, Vicent; Antunes, Paula C G; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-07

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissue are practically identical, so that the absorbed doses in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score the photon fluence differential in energy in water and in various human tissues (muscle, adipose and bone), in all cases including a realistic modelling of low-energy brachytherapy sources in order to benchmark the proposed formalism. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).

  19. The accuracy of a 2D and 3D dendritic tip scaling parameter in predicting the columnar to equiaxed transition (CET)

    NASA Astrophysics Data System (ADS)

    Seredyński, M.; Rebow, M.; Banaszek, J.

    2016-09-01

    The accuracy of dendrite tip kinetics models relies on the reliability of the stability constant used, which is usually determined experimentally for 3D situations and then applied to 2D models. The paper reports the authors' attempt to remedy this situation by deriving a 2D dendritic tip scaling parameter for an aluminium-based alloy, Al-4wt%Cu. The obtained parameter is then incorporated into the KGT dendritic growth model in order to compare it with the original 3D KGT counterpart and to derive two-dimensional and three-dimensional versions of the modified Hunt's analytical model for the columnar-to-equiaxed transition (CET). The conclusions drawn from this analysis are further confirmed through numerical calculations of two cases of Al-4wt%Cu alloy solidification using the front-tracking technique. Results, including the position of the front between the porous zone and the undercooled liquid, the calculated solutal undercooling, and a new predictor of the relative tendency to form an equiaxed zone, are shown, compared and discussed for the two numerical cases. The necessity of calculating sufficiently precise values of the tip scaling parameter in 2D and 3D is stressed.

  20. PHASEGO: A toolkit for automatic calculation and plot of phase diagram

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li

    2015-06-01

    The PHASEGO package extracts the Helmholtz free energy from the phonon density of states obtained by first-principles calculations. With the help of equation-of-state fitting, it derives the Gibbs free energy as a function of pressure/temperature at fixed temperature/pressure. Based on the quasi-harmonic approximation (QHA), it calculates the possible phase boundaries among all the structures of interest and finally plots the phase diagram automatically. For single-phase analysis, PHASEGO can numerically derive many properties, such as the thermal expansion coefficients, bulk moduli, heat capacities, thermal pressures, Hugoniot pressure-volume-temperature relations, Grüneisen parameters, and Debye temperatures. In order to check its phase-transition analysis capability, I present here two examples: semiconductor GaN and metallic Fe. In the case of GaN, PHASEGO automatically determined and plotted the phase boundaries among the provided zinc blende (ZB), wurtzite (WZ) and rocksalt (RS) structures. In the case of Fe, the results indicate that at high temperature the electronic thermal-excitation free-energy corrections considerably alter the phase boundaries among the body-centered cubic (bcc), face-centered cubic (fcc) and hexagonal close-packed (hcp) structures.
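
    The QHA workflow described above condenses to a few lines: for each volume, add the phonon free energy to the static energy, then minimize F + PV over volume to obtain G(P, T). A schematic sketch with toy functions in place of first-principles data (PHASEGO itself works from phonon densities of states):

```python
# Schematic QHA sketch: F(V,T) = E(V) + F_ph(V,T); G(P,T) = min over V of [F + P*V].
# E(V) and F_ph(V,T) below are toy functions, not first-principles data.
import numpy as np

V = np.linspace(14.0, 22.0, 200)          # volumes, Angstrom^3 per atom

def E_static(V):                          # toy static equation of state, eV/atom
    return 0.5 * (V - 18.0) ** 2 / 18.0

def F_phonon(V, T):                       # toy phonon free energy, eV/atom
    return -1e-4 * T * (V / 18.0)         # softens with increasing volume

def gibbs(P, T):
    """Gibbs free energy at (P, T) by minimizing over volume.
    P in eV/Angstrom^3 (1 GPa is about 6.24e-3 eV/A^3)."""
    F = E_static(V) + F_phonon(V, T) + P * V
    i = np.argmin(F)
    return F[i], V[i]                     # G and the equilibrium volume

G, Veq = gibbs(P=6.24e-3, T=300.0)        # roughly 1 GPa at 300 K
print(f"G = {G:.4f} eV/atom at V = {Veq:.2f} A^3/atom")
# A phase boundary is located where G_phase1(P,T) equals G_phase2(P,T).
```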

  1. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model.

    PubMed

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-28

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet the exact memory kernel is hard to obtain, and calculations based on perturbative expansions are often employed. Using the spin-boson model as an example, we assess the convergence of high-order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach with the Dyson expansion of the exact memory kernel. High-order expansions of the memory kernels are obtained by extending our previous work on calculating perturbative expansions of open-system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high-order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in the cases of a slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher-order rate constants beyond Fermi's golden rule is investigated.
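
    For reference, the memory kernel discussed above enters the Nakajima-Zwanzig equation in its standard form; writing ρs for the reduced density matrix, Ls for the system Liouvillian and K for the kernel, and dropping the inhomogeneous initial-correlation term for brevity:

```latex
\frac{\mathrm{d}\rho_s(t)}{\mathrm{d}t}
  = -\,\mathrm{i}\,\mathcal{L}_s\,\rho_s(t)
  - \int_0^{t} \mathrm{d}\tau\,\mathcal{K}(t-\tau)\,\rho_s(\tau)
```

    The perturbative treatments assessed in the abstract expand K order by order in the system-bath coupling; the convergence question is whether truncating this series reproduces the exact kernel extracted from the hierarchical equations of motion.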

  2. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-01

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet the exact memory kernel is hard to obtain, and calculations based on perturbative expansions are often employed. Using the spin-boson model as an example, we assess the convergence of high-order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach with the Dyson expansion of the exact memory kernel. High-order expansions of the memory kernels are obtained by extending our previous work on calculating perturbative expansions of open-system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high-order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in the cases of a slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher-order rate constants beyond Fermi's golden rule is investigated.

  3. Simulating irradiation hardening in tungsten under fast neutron irradiation including Re production by transmutation

    NASA Astrophysics Data System (ADS)

    Huang, Chen-Hsi; Gilbert, Mark R.; Marian, Jaime

    2018-02-01

    Simulations of neutron damage under fusion energy conditions must capture the effects of transmutation, both in terms of accurate chemical inventory buildup and in terms of the physics of the interactions between transmutation elements and irradiation defect clusters. In this work, we integrate neutronics, primary damage calculations, molecular dynamics results, Re transmutation calculations, and stochastic cluster dynamics simulations to study neutron damage in single-crystal tungsten, mimicking divertor materials. To gauge the accuracy and validity of the simulations, we first study the material response under experimental conditions at the JOYO fast reactor in Japan and the High Flux Isotope Reactor at Oak Ridge National Laboratory, for which measurements of cluster densities and hardening levels up to 2 dpa exist. We then provide calculations under expected DEMO fusion conditions. Several key mechanisms involving Re atoms and defect clusters are found to govern the accumulation of irradiation damage in each case. We use established correlations to translate damage accumulation into hardening increases and compare our results to the experimental measurements. We find hardening increases in excess of 5000 MPa in all cases, which casts doubt on the integrity of W-based materials under long-term fusion exposure.

  4. Accelerating and focusing protein-protein docking correlations using multi-dimensional rotational FFT generating functions.

    PubMed

    Ritchie, David W; Kozakov, Dima; Vajda, Sandor

    2008-09-01

    Predicting how proteins interact at the molecular level is a computationally intensive task. Many protein docking algorithms begin by using fast Fourier transform (FFT) correlation techniques to find putative rigid-body docking orientations. Most such approaches use 3D Cartesian grids and are therefore limited to computing three-dimensional (3D) translational correlations. However, translational FFTs can speed up the calculation in only three of the six rigid-body degrees of freedom, and they cannot easily incorporate prior knowledge about a complex to focus and hence further accelerate the calculation. Furthermore, several groups have developed multi-term interaction potentials and others use multi-copy approaches to simulate protein flexibility, both of which add to the computational cost of FFT-based docking algorithms. Hence there is a need for more powerful and more versatile FFT docking techniques. This article presents a closed-form 6D spherical polar Fourier correlation expression from which arbitrary multi-dimensional, multi-property, multi-resolution FFT correlations may be generated. The approach is demonstrated by calculating 1D, 3D and 5D rotational correlations of 3D shape and electrostatic expansions up to polynomial order L=30 on a 2 GB personal computer. As expected, 3D correlations are found to be considerably faster than 1D correlations but, surprisingly, 5D correlations are often slower than 3D correlations. Nonetheless, we show that 5D correlations will be advantageous when calculating multi-term knowledge-based interaction potentials. When docking the 84 complexes of the Protein Docking Benchmark, blind 3D shape plus electrostatic correlations take around 30 minutes on a contemporary personal computer and find acceptable solutions within the top 20 in 16 cases. Applying a simple angular constraint to focus the calculation around the receptor binding site produces acceptable solutions within the top 20 in 28 cases. Further constraining the search to the ligand binding site yields acceptable solutions within the top 20 in up to 48 cases, with calculation times of just a few minutes per complex. Hence the approach described provides a practical and fast tool for rigid-body protein-protein docking, especially when prior knowledge about one or both binding sites is available.
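
    The speed-up exploited above rests on the convolution theorem: a correlation over all grid offsets becomes a pointwise product in Fourier space. A minimal sketch of the familiar 3D translational case (the paper's contribution is the rotational, spherical-polar analogue), with random arrays standing in for shape/charge grids:

```python
# Minimal FFT correlation sketch (3D translational case): score all relative
# translations of a ligand grid against a receptor grid in O(N log N).
# The paper generalizes this idea to multi-dimensional rotational correlations.
import numpy as np

rng = np.random.default_rng(0)
N = 64
receptor = rng.random((N, N, N))   # placeholder receptor shape/charge grid
ligand = rng.random((N, N, N))     # placeholder ligand grid

# Correlation theorem: corr = IFFT( FFT(receptor) * conj(FFT(ligand)) )
corr = np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))).real

best = np.unravel_index(np.argmax(corr), corr.shape)
print("best translational offset (voxels, circular):", best)
```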

  5. Clinical applications of advanced rotational radiation therapy

    NASA Astrophysics Data System (ADS)

    Nalichowski, Adrian

    Purpose: With the fast adoption of emerging technologies, it is critical to fully test and understand their limits and capabilities. In this work we investigate a new graphics processing unit (GPU)-based treatment planning algorithm and its applications in helical tomotherapy dose delivery. We explore the limits of the system by applying it to the challenging clinical cases of total marrow irradiation (TMI) and stereotactic radiosurgery (SRS). We also analyze the feasibility of alternative fractionation schemes for total body irradiation (TBI) and TMI based on reported historical data on lung dose and interstitial pneumonitis (IP) incidence rates. Methods and Materials: An anthropomorphic phantom was used to create TMI plans using the new GPU-based treatment planning system and the existing CPU-cluster-based system. Optimization parameters were selected based on clinically used values for field width, modulation factor and pitch. Treatment plans were also created in the Eclipse treatment planning system (Varian Medical Systems Inc, Palo Alto, CA) using volumetric modulated arc therapy (VMAT) for dose delivery on an iX treatment unit. A retrospective review was performed of 42 publications that reported IP rates along with lung dose, fractionation regimen, dose rate and chemotherapy; the analysis covered nearly 3,200 patients and 34 unique radiation regimens. Multivariate logistic regression was performed to determine parameters associated with IP and to establish a dose-response function. Results: The results showed very good dosimetric agreement between the GPU- and CPU-calculated plans. The results of the SRS study show that the GPU planning system can maintain 90% target coverage while meeting all the constraints of the RTOG 0631 protocol. Beam-on time for tomotherapy and flattening-filter-free RapidArc was much shorter than for Vero or CyberKnife. The retrospective data analysis showed that lung dose and cyclophosphamide (Cy) are both predictors of IP in TBI/TMI treatments; the dose rate was not found to be an independent risk factor for IP. The model failed to establish an accurate dose-response function, but the discrete data indicated a radiation dose threshold of 7.6 Gy (EQD2_repair) and 120 mg/kg of Cy below which no IP cases were reported. Conclusion: The TomoTherapy GPU-based dose engine is capable of calculating TMI treatment plans with plan quality nearly identical to plans calculated using the traditional CPU/cluster-based system, while significantly reducing the time required for optimization and dose calculation. The new system achieved a more uniform dose distribution throughout the target volume and a steeper dose fall-off, resulting in superior OAR sparing compared with the Eclipse treatment planning system for VMAT delivery. The machine optimization parameters tested for the TMI cases provide a comprehensive overview of the capabilities of the treatment planning station and the associated helical delivery system. The new system also proved to be dosimetrically compatible with other leading modalities for treatments of small and complicated target volumes, and was even superior when treatment delivery times were compared. These findings demonstrate that the advanced treatment planning and delivery system from TomoTherapy is well suited for treating complicated cases such as TMI and SRS, and is often dosimetrically and/or logistically superior to other modalities. The new planning system can easily meet the threshold lung dose constraint established in this study. The results presented here on the capabilities of tomotherapy and on the identified lung dose threshold provide an opportunity to explore alternative fractionation schemes without sacrificing target coverage or lung toxicity. (Abstract shortened by ProQuest.)

  6. A computer program for the calculation of the flow field in supersonic mixed-compression inlets at angle of attack using the three-dimensional method of characteristics with discrete shock wave fitting

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Hoffman, J. D.; Bishop, A. R.

    1978-01-01

    The calculation procedure is based on the method of characteristics for steady three-dimensional flow. The bow shock wave and the internal shock wave system were computed using a discrete shock wave fitting procedure. The general structure of the computer program is discussed, and a brief description of each subroutine is given. All program input parameters are defined, and a brief discussion on interpretation of the output is provided. A number of sample cases, complete with data deck listings, are presented.

  7. Enhanced analysis and users manual for radial-inflow turbine conceptual design code RTD

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    Modeling enhancements made to a radial-inflow turbine conceptual design code are documented in this report. A stator-endwall clearance-flow model was added for use with pivoting vanes. The rotor calculations were modified to account for swept blades and splitter blades. Stator and rotor trailing-edge losses and a vaneless-space loss were added to the loss model. Changes were made to the disk-friction and rotor-clearance loss calculations. The loss model was then calibrated based on experimental turbine performance. A complete description of code input and output along with sample cases are included in the report.

  8. Determination of the absolute configuration of two estrogenic nonylphenols in solution by chiroptical methods

    NASA Astrophysics Data System (ADS)

    Reinscheid, Uwe M.

    2009-01-01

    The absolute configurations of two estrogenic nonylphenols were determined in solution. Neither nonylphenol (NP35 nor NP112) could be crystallized, so only solution methods can directly answer the question of their absolute configuration. The conclusion based on experimental and calculated optical rotation and VCD data for the nonylphenol NP35 was independently confirmed by another study using a camphanoyl derivative and X-ray analysis of the obtained crystals. In the case of NP112, the experimental rotation data are inconclusive; however, the comparison between experimental and calculated VCD data allowed the determination of the absolute configuration.

  9. A quantum framework for likelihood ratios

    NASA Astrophysics Data System (ADS)

    Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.

    The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.

  10. Evaluation of radiation loading on finite cylindrical shells using the fast Fourier transform: A comparison with direct numerical integration.

    PubMed

    Liu, S X; Zou, M S

    2018-03-01

    The radiation loading on a vibrating finite cylindrical shell is conventionally evaluated through direct numerical integration (DNI). An alternative strategy via the fast Fourier transform algorithm is put forward in this work, based on the general expression for the radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed calculation strategy significantly reduces the time cost compared with the conventional approach of straightforward numerical integration.
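
    The contrast drawn above, direct numerical integration versus an FFT evaluation of the same convolution-type integral, can be made concrete with a one-dimensional toy model; the kernel and source below are hypothetical placeholders, not the paper's radiation-impedance expressions.

```python
# Toy sketch: evaluating a convolution-type loading integral directly (O(N^2))
# versus via an FFT-based convolution (O(N log N)). Kernel and source are
# hypothetical stand-ins, not the radiation-impedance expressions of the paper.
import numpy as np

N = 4095                                   # odd, so z = 0 lies exactly on the grid
z = np.linspace(-1.0, 1.0, N)
dz = z[1] - z[0]
source = np.exp(-50.0 * z**2)              # hypothetical velocity distribution
kernel = np.exp(-np.abs(z))                # hypothetical interaction kernel

# Direct numerical integration at every field point: O(N^2)
direct = np.array([np.sum(np.exp(-np.abs(zi - z)) * source) for zi in z]) * dz

# FFT-backed evaluation of the same convolution: O(N log N)
fft_result = np.convolve(source, kernel, mode="same") * dz

print("max abs difference:", np.max(np.abs(direct - fft_result)))
```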

  11. Doping of AlxGa1-xN

    NASA Astrophysics Data System (ADS)

    Stampfl, C.; Van de Walle, Chris G.

    1998-01-01

    N-type AlxGa1-xN exhibits a dramatic decrease in the free-carrier concentration for x⩾0.40. Based on first-principles calculations, we propose that two effects are responsible for this behavior: (i) in the case of doping with oxygen (the most common unintentional donor), a DX transition occurs, which converts the shallow donor into a deep level; and (ii) compensation by the cation vacancy (VGa or VAl), a triple acceptor, increases with alloy composition x. For p-type doping, the calculations indicate that the doping efficiency decreases due to compensation by the nitrogen vacancy. In addition, an increase in the acceptor ionization energy is found with increasing x.

  12. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light-field and radiation-field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilizes the mapping between the calculated dose image and the film grayscale image to create a dose-versus-pixel-value calibration model. This model is then used to calibrate the film grayscale image into a film relative dose image. The dose agreement between the calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdominal cancer and seven head-and-neck cancer patients) were tested. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases to test the robustness of the method. The PBC method could overcome the film-lot and post-exposure-time variations of RTQA2 film and yield a good 2D relative dose calibration. The mean gamma passing rate for the eight patients was 97.90% ± 1.7%, showing good dose consistency between the calculated and film dose images. In the error test, the PBC method could over-calibrate the film, meaning that some dose errors in the film would be falsely corrected so as to keep the film dose consistent with the calculated dose image; this would then lead to a false-negative result in the gamma analysis. In these cases, however, the derivative of the dose calibration curve becomes non-monotonic, which exposes the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA, and the robustness of the PBC method was improved by analyzing the monotonicity of the derivative of the calibration curve.
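
    The calibration idea described above, pairing each film pixel value with the co-registered calculated dose to build a monotonic dose-versus-pixel-value curve, can be sketched schematically as follows. The images are synthetic, and rank (percentile) pairing stands in for the paper's spatial mapping step.

```python
# Schematic plan-based calibration sketch: pair film pixel values with the
# calculated dose via matching ranks, then apply the resulting calibration
# curve to the scan. Images are synthetic; pairing by sorted percentiles is a
# stand-in for the paper's mapping between co-registered images.
import numpy as np

rng = np.random.default_rng(2)
dose = rng.random((64, 64)) * 3.0                  # calculated dose image, Gy
pixel = 40000.0 - 6000.0 * dose + rng.normal(0.0, 50.0, dose.shape)  # toy film scan

# Matching ranks give (pixel value, dose) calibration pairs: the film darkens
# with dose, so ascending pixel values pair with descending doses.
pv_sorted = np.sort(pixel.ravel())                 # ascending pixel values
dose_desc = np.sort(dose.ravel())[::-1]            # descending doses

# Calibrate the film by interpolating dose at each measured pixel value.
film_dose = np.interp(pixel.ravel(), pv_sorted, dose_desc).reshape(dose.shape)
print("mean abs dose error (Gy):", np.abs(film_dose - dose).mean())
```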

  13. Application of Risk-Based Inspection method for gas compressor station

    NASA Astrophysics Data System (ADS)

    Zhang, Meng; Liang, Wei; Qiu, Zeyang; Lin, Yang

    2017-05-01

    Owing to their complex processes and large amounts of equipment, gas compressor stations carry operational risks, and research on their integrity management is so far insufficient. In this paper, the basic principle of Risk-Based Inspection (RBI) and the RBI methodology are studied, and an RBI process for gas compressor stations is developed. The corrosion loops and logistics loops of the gas compressor station are determined through a study of the corrosion mechanisms and processes of the station. The probability of failure is calculated using modified coefficients, and the consequence of failure is calculated by a quantitative method. In particular, we address the application of the RBI methodology to a gas compressor station; the resulting risk ranking helps to find the best preventive inspection plan in the case study.

  14. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment

    PubMed Central

    Lin, Fan; Xiao, Bin

    2017-01-01

    Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm based on a micro-electromechanical systems (MEMS) sensor, intended to overcome the limited computing performance and memory resources of mobile devices and to further improve the interactive augmented-reality experience of clients, in which digital information is added to the real world. The algorithm takes the gravity projection vector corresponding to each feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation and improves the accuracy of feature extraction. For the registration method based on matching and tracking of natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm is used to eliminate abnormal matches, and a Gravity Kanade-Lucas tracking (KLT) algorithm based on the MEMS sensor can be used for tracking registration of targets and for improving the robustness of the tracking registration algorithm in a mobile environment. PMID:29088228

  15. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment.

    PubMed

    Hong, Zhiling; Lin, Fan; Xiao, Bin

    2017-01-01

    Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm based on a micro-electromechanical systems (MEMS) sensor, intended to overcome the limited computing performance and memory resources of mobile devices and to further improve the interactive augmented-reality experience of clients, in which digital information is added to the real world. The algorithm takes the gravity projection vector corresponding to each feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation and improves the accuracy of feature extraction. For the registration method based on matching and tracking of natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm is used to eliminate abnormal matches, and a Gravity Kanade-Lucas tracking (KLT) algorithm based on the MEMS sensor can be used for tracking registration of targets and for improving the robustness of the tracking registration algorithm in a mobile environment.

  16. A Method for Calculating the Mean Orbits of Meteor Streams

    NASA Astrophysics Data System (ADS)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of meteor stream orbits, and of a large number of works devoted to the selection of streams and their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetic (sometimes weighted) sample means. On the basis of these means, searches for parent bodies, studies of the evolution of the swarms generating these streams, analyses of one-dimensional and multidimensional distributions of the elements, etc., are performed. We show that systematic errors are present in the estimates of the mean orbital elements in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, ignoring the fact that the elements are not only correlated but dependent quantities, with interrelations that are in most cases nonlinear. Numerous examples of such inaccuracies are given, in particular cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We propose a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations; after this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, now considered as a standard orbit. Variance analysis is used to estimate the errors in the orbital elements of streams whose orbits are obtained by averaging the orbital elements of the member meteoroids without taking their interdependence into account. The results of this analysis indicate the behavior of the systematic errors in the orbital elements of meteor streams. As an example, the effect of the incorrect computation method on the distribution of stream orbital elements close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
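
    The central point, that element-wise averaging of correlated and nonlinearly related orbital elements is biased, is easy to demonstrate: since a = q/(1 - e) is nonlinear in e, the semimajor axis of the "orbit of the means" differs systematically from the mean of the individual semimajor axes. A toy sketch with synthetic element samples:

```python
# Toy demonstration of the bias from element-wise averaging: because
# a = q / (1 - e) is nonlinear (convex) in e, the quantity derived from the
# mean elements differs from the mean of the derived quantity (Jensen's
# inequality). All element samples below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
q = rng.normal(0.95, 0.02, n)             # perihelion distances, AU (synthetic)
e = rng.normal(0.80, 0.08, n)             # eccentricities (synthetic)
e += 0.5 * (q - 0.95)                     # correlated with q, as real elements are
e = np.clip(e, 0.5, 0.95)                 # keep eccentricities physical (e < 1)

a_from_mean_elements = q.mean() / (1.0 - e.mean())   # averaging elements first
a_mean = np.mean(q / (1.0 - e))                      # averaging derived quantity

print(f"a from averaged elements: {a_from_mean_elements:.3f} AU")
print(f"mean of individual a:     {a_mean:.3f} AU")  # systematically larger
```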

  17. Look Before You Leap: What Are the Obstacles to Risk Calculation in the Equestrian Sport of Eventing?

    PubMed Central

    O’Brien, Denzil

    2016-01-01

    Simple Summary This paper examines a number of methods for calculating injury risk for riders in the equestrian sport of eventing, and suggests that the primary locus of risk is the action of the horse jumping, and the jump itself. The paper argues that risk calculation should therefore focus first on this locus. Abstract All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps. PMID:26891334

  18. Use of Patients With Diarrhea Who Test Negative for Rotavirus as Controls to Estimate Rotavirus Vaccine Effectiveness Through Case-Control Studies.

    PubMed

    Tate, Jacqueline E; Patel, Manish M; Cortese, Margaret M; Payne, Daniel C; Lopman, Benjamin A; Yen, Catherine; Parashar, Umesh D

    2016-05-01

    Case-control studies are often performed to estimate postlicensure vaccine effectiveness (VE), but the enrollment of controls can be challenging, time-consuming, and costly. We evaluated whether children enrolled in the same hospital-based diarrheal surveillance used to identify rotavirus cases, but who test negative for rotavirus (test-negative controls), can be considered a suitable alternative to nondiarrheal hospital or community-based control groups (traditional controls). We compared calculated VE estimates as a function of varying values of true VE, attack rates of rotavirus and nonrotavirus diarrhea in the population, and the sensitivity and specificity of the rotavirus enzyme immunoassay. We also searched the literature to identify rotavirus VE studies that used traditional and test-negative control groups, and compared the VE estimates obtained using the different control groups. Assuming a 1% attack rate for severe rotavirus diarrhea, a 3% attack rate for severe nonrotavirus diarrhea in the population, a test sensitivity of 96%, and a specificity of 100%, the calculated VE estimates using both the traditional and test-negative control groups closely approximated the true VE for all values from 30% to 100%. As true VE decreased, the traditional case-control approach slightly overestimated the true VE and the test-negative case-control approach slightly underestimated it, but the absolute difference was only ±0.2 percentage points. Field VE estimates from 10 evaluations that used both traditional and test-negative control groups were similar regardless of the control group used. The use of rotavirus test-negative controls offers an efficient and cost-effective approach to estimating rotavirus VE through case-control studies. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
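
    In both designs the VE estimate reduces to one minus an odds ratio; under the test-negative design the controls are simply the rotavirus-negative children. A minimal sketch with hypothetical counts:

```python
# Minimal sketch of a test-negative VE estimate: VE = (1 - OR) x 100, where the
# odds ratio compares vaccination odds among rotavirus-positive cases with
# those among rotavirus-negative (test-negative) controls. Counts are hypothetical.
vacc_cases, unvacc_cases = 40, 160          # rotavirus-positive children
vacc_controls, unvacc_controls = 300, 300   # rotavirus-negative controls

odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
ve = (1.0 - odds_ratio) * 100.0
print(f"OR = {odds_ratio:.2f}, VE = {ve:.0f}%")   # here OR = 0.25 -> VE = 75%
```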

  19. Volumetric three-component velocimetry measurements of the turbulent flow around a Rushton turbine

    NASA Astrophysics Data System (ADS)

    Sharp, Kendra V.; Hill, David; Troolin, Daniel; Walters, Geoffrey; Lai, Wing

    2010-01-01

    Volumetric three-component velocimetry measurements have been taken of the flow field near a Rushton turbine in a stirred tank reactor. This particular flow field is highly unsteady and three-dimensional, and is characterized by a strong radial jet, large tank-scale ring vortices, and small-scale blade tip vortices. The experimental technique uses a single camera head with three apertures to obtain approximately 15,000 three-dimensional vectors in a cubic volume. These velocity data offer the most comprehensive view to date of this flow field, especially since they are acquired at three Reynolds numbers (15,000, 107,000, and 137,000). Mean velocity fields and turbulent kinetic energy quantities are calculated. The volumetric nature of the data enables tip vortex identification, vortex trajectory analysis, and calculation of vortex strength. Three identification methods for the vortices are compared based on: the calculation of circumferential vorticity; the calculation of local pressure minima via an eigenvalue approach; and the calculation of swirling strength again via an eigenvalue approach. The use of two-dimensional data and three-dimensional data is compared for vortex identification; a "swirl strength" criterion is less sensitive to completeness of the velocity gradient tensor and overall provides clearer identification of the tip vortices. The principal components of the strain rate tensor are also calculated for one Reynolds number case as these measures of stretching and compression have recently been associated with tip vortex characterization. Vortex trajectories and strength compare favorably with those in the literature. No clear dependence of trajectory on Reynolds number is deduced. The visualization of tip vortices up to 140° past blade passage in the highest Reynolds number case is notable and has not previously been shown.

  20. The clustering-based case-based reasoning for imbalanced business failure prediction: a hybrid approach through integrating unsupervised process with supervised process

    NASA Astrophysics Data System (ADS)

    Li, Hui; Yu, Jun-Ling; Yu, Le-An; Sun, Jie

    2014-05-01

    Case-based reasoning (CBR) is one of the main methods in business forecasting; it performs well in prediction and can provide explanations for its results. In business failure prediction (BFP), the number of failed enterprises is relatively small compared with the number of non-failed ones, yet the loss is huge when an enterprise fails. It is therefore necessary to develop methods, trained on imbalanced samples, that forecast well for this small proportion of failed enterprises while remaining accurate overall. Commonly used methods built on the assumption of balanced samples do not predict the minority class well on imbalanced samples consisting of minority (failed) and majority (non-failed) enterprises. This article develops a new method called clustering-based CBR (CBCBR), which integrates clustering analysis, an unsupervised process, with CBR, a supervised process, to enhance the efficiency of retrieving information from both the minority and the majority in CBR. In CBCBR, case classes are first generated through hierarchical clustering of the stored experienced cases, and class centres are calculated by integrating the information of the cases in each clustered class. When predicting the label of a target case, its nearest clustered case class is first retrieved by ranking the similarities between the target case and each clustered class centre; then the nearest neighbours of the target case within the retrieved class are found, and finally the labels of these nearest experienced cases are used in prediction, as sketched below. In an empirical experiment with two imbalanced samples from China, the performance of CBCBR was compared with classical CBR, a support vector machine, logistic regression and multivariate discriminant analysis. The results show that, compared with the other four methods, CBCBR performed significantly better in terms of sensitivity for identifying the minority samples while maintaining high total accuracy. The proposed approach makes CBR useful for imbalanced forecasting.
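
    A compact sketch of the retrieval pipeline described above: hierarchical clustering of stored cases, nearest-centre class selection, then a k-NN vote inside the chosen class. The data are synthetic stand-ins, not the Chinese firm samples, and scikit-learn's agglomerative clustering stands in for the paper's hierarchical clustering step.

```python
# Sketch of clustering-based CBR retrieval: (1) hierarchically cluster stored
# cases, (2) pick the class whose centre is nearest the target, (3) k-NN vote
# inside that class. Data are synthetic, not the paper's firm samples.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.random((300, 5))                        # stored cases (e.g. financial ratios)
y = (rng.random(300) < 0.1).astype(int)         # imbalanced labels: 1 = failed

n_classes = 8
clusters = AgglomerativeClustering(n_clusters=n_classes).fit_predict(X)
centres = np.array([X[clusters == c].mean(axis=0) for c in range(n_classes)])

def predict(target, k=5):
    c = np.argmin(np.linalg.norm(centres - target, axis=1))  # nearest case class
    idx = np.where(clusters == c)[0]
    dist = np.linalg.norm(X[idx] - target, axis=1)
    nearest = idx[np.argsort(dist)[:k]]                      # k-NN within the class
    return int(y[nearest].mean() >= 0.5)                     # majority-label vote

print(predict(rng.random(5)))
```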

  1. Calculation of unsteady aerodynamics for four AGARD standard aeroelastic configurations

    NASA Technical Reports Server (NTRS)

    Bland, S. R.; Seidel, D. A.

    1984-01-01

    Calculated unsteady aerodynamic characteristics for four Advisory Group for Aerospace Research and Development (AGARD) standard aeroelastic two-dimensional airfoils and for one of the AGARD three-dimensional wings are reported. Calculations were made using the finite-difference codes XTRAN2L (two-dimensional flow) and XTRAN3S (three-dimensional flow), which solve the transonic small-disturbance potential equations. Results are given for the 36 AGARD cases for the NACA 64A006, NACA 64A010, and NLR 7301 airfoils, with experimental comparisons for most of these cases. Additionally, six of the MBB-A3 airfoil cases are included. Finally, results are given for three of the cases for the rectangular wing.

  2. A method for analyzing the business case for provider participation in the National Cancer Institute's Community Clinical Oncology Program and similar federally funded, provider-based research networks.

    PubMed

    Reiter, Kristin L; Song, Paula H; Minasian, Lori; Good, Marjorie; Weiner, Bryan J; McAlearney, Ann Scheck

    2012-09-01

    The Community Clinical Oncology Program (CCOP) plays an essential role in the efforts of the National Cancer Institute (NCI) to increase enrollment in clinical trials. Currently, there is little practical guidance in the literature to assist provider organizations in analyzing the return on investment (ROI), or business case, for establishing and operating a provider-based research network (PBRN) such as the CCOP. In this article, the authors present a conceptual model of the business case for PBRN participation, a spreadsheet-based tool and advice for evaluating the business case for provider participation in a CCOP organization. A comparative, case-study approach was used to identify key components of the business case for hospitals attempting to support a CCOP research infrastructure. Semistructured interviews were conducted with providers and administrators. Key themes were identified and used to develop the financial analysis tool. Key components of the business case included CCOP start-up costs, direct revenue from the NCI CCOP grant, direct expenses required to maintain the CCOP research infrastructure, and incidental benefits, most notably downstream revenues from CCOP patients. The authors recognized the value of incidental benefits as an important contributor to the business case for CCOP participation; however, currently, this component is not calculated. The current results indicated that providing a method for documenting the business case for CCOP or other PBRN involvement will contribute to the long-term sustainability and expansion of these programs by improving providers' understanding of the financial implications of participation. Copyright © 2011 American Cancer Society.

  3. Quantification by aberration corrected (S)TEM of boundaries formed by symmetry breaking phase transformations.

    PubMed

    Schryvers, D; Salje, E K H; Nishida, M; De Backer, A; Idrissi, H; Van Aert, S

    2017-05-01

    The present contribution reviews recent work on the quantification of atom displacements, atom site occupations and the level of crystallinity in various systems, based on aberration-corrected HR(S)TEM images. Depending on the case studied, picometre-range precisions for individual distances can be obtained, boundary widths can be determined at the unit-cell level, or statistical evolutions of the fractions of ordered areas can be calculated. In all of these cases, such quantitative measures open new routes for the application of the respective materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Change in Reported Lyme Disease Incidence in the Northeast and Upper Midwest, 1991-2014

    EPA Pesticide Factsheets

    This indicator shows how reported Lyme disease incidence has changed by state since 1991, based on the number of new cases per 100,000 people. The total change has been estimated from the average annual rate of change in each state. This map is limited to the 14 states where Lyme disease is most common, where annual rates are consistently above 10 cases per 100,000. Connecticut, New York, and Rhode Island had too much year-to-year variation in reporting practices to allow trend calculation. For more information: www.epa.gov/climatechange/science/indicators

  5. [OR minute myth : Guidelines for calculation of DRG revenues per OR minute].

    PubMed

    Waeschle, R M; Hinz, J; Bleeker, F; Sliwa, B; Popov, A; Schmidt, C E; Bauer, M

    2016-02-01

    The economic situation in German hospitals is tense and requires the implementation of differentiated controlling instruments. Accordingly, parameters describing the revenue development of the different organizational units within a hospital are needed, particularly in the revenue- and cost-intensive operating theater area. So far, hardly any established productivity data are available for controlling OR revenues over the course of the year. This article describes a valid method for the calculation of case-related revenues per OR minute in conformity with the diagnosis-related groups (DRG) system. For this purpose, the relevant datasets from the OR information system and the § 21 productivity report (DRG grouping) of the University Medical Center Göttingen were combined. The revenues defined in the DRG browser of the Institute for Hospital Reimbursement (InEK) were assigned to the corresponding process times--incision-suture time (SNZ), operative preparation time and anesthesiology time--according to the InEK system. All inpatient DRG cases treated within the OR were included and differentiated according to the responsible surgical department. The cost centers "OR section" and "anesthesia" were isolated to calculate the revenues of the operating theater. SNZ clusters and cost-type groups were formed to demonstrate their impact on the revenues per OR minute. A surgical personnel simultaneity factor (GZF) was calculated by dividing the revenues for surgeons by those for anesthesiologists; this factor represents the maximum DRG-financed personnel deployment for surgeons in German hospitals. The revenue per OR minute, including all cost types and DRGs, was 16.63 €/min, ranging from 10.45 to 24.34 €/min depending on the surgical field. The revenues were stable when SNZ clusters were analyzed. The differentiation of cost-type groups revealed a revenue reduction especially after the exclusion of revenues for implants and infrastructure. The calculated GZF over all surgical departments was 2.2 (range 1.9-3.6); a calculation of this factor at the DRG level can give economically relevant information about case-related personnel deployment. This analysis shows for the first time the DRG-conform calculation of revenues per OR minute, with a strong dependence on the considered cost type and the performing surgical field. Repeated analyses are necessary due to the lack of reference values and are a suitable tool for monitoring revenue development after process-optimization measures; comparative analyses between different surgical fields on this data basis should be avoided. The method demonstrated can serve as a guideline for other hospitals to calculate DRG revenues within the OR, enabling cost-effectiveness analyses that compare these revenues with cost data from cost-unit accounting at the DRG or case level.
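
    Stripped of the InEK bookkeeping, the headline quantities reduce to revenue sums divided by process minutes and a surgeon-to-anesthesia revenue quotient. A toy sketch with invented figures (not Göttingen data):

```python
# Toy sketch of revenue per OR minute and the surgical simultaneity factor
# (GZF). All euro amounts and minutes are invented for illustration only.
cases = [
    # (OR cost-centre revenue EUR, anesthesia revenue EUR, incision-suture minutes)
    (1200.0, 500.0, 75.0),
    (2400.0, 900.0, 140.0),
    (800.0, 400.0, 50.0),
]

or_revenue = sum(c[0] for c in cases)
anesthesia_revenue = sum(c[1] for c in cases)
minutes = sum(c[2] for c in cases)

print(f"revenue per OR minute: {(or_revenue + anesthesia_revenue) / minutes:.2f} EUR/min")
print(f"GZF (surgeons / anesthesia): {or_revenue / anesthesia_revenue:.2f}")
```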

  6. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations.

    PubMed

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    The aim of this study was to compare two bunkers for an 18 MV Varian 2100 Clinac accelerator, one designed using only protocol recommendations and one using data derived from Monte Carlo (MC) simulation. High-energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against contaminant neutrons is recommended by the new IAEA and NCRP protocols. The latest protocols released by the IAEA (Safety Report No. 47) and NCRP Report No. 151 were used for the bunker design calculations, and MC-based data were also derived; two bunkers, one from the protocols and one from the MC data, were designed and discussed. Regarding the door thickness, the MC simulation and the Wu-McGinley analytical method were closer in both BPE and lead thickness. In the case of the primary and secondary barriers, the MC simulation resulted in 440.11 mm of ordinary concrete, with a total concrete thickness of 1709 mm required. Calculating the same parameters with the recommended analytical methods resulted in a required thickness of 1762 mm, using the recommended TVL of 445 mm for concrete; additionally, for the secondary barrier a thickness of 752.05 mm was obtained. Our results showed that the MC simulation and the protocol recommendations are in good agreement for the contaminant-radiation dose calculation. The differences between the analytical and MC methods revealed that relying on only one method for bunker design may lead to underestimation or overestimation in dose and shielding calculations.
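
    The analytical barrier calculation in such protocols follows from tenth-value layers: the required number of TVLs is the base-10 logarithm of the required attenuation. A hedged sketch of the NCRP 151-style primary-barrier formula, using the 445 mm concrete TVL quoted in the abstract and with all workload, distance and occupancy inputs invented for illustration:

```python
# Sketch of the NCRP-151-style analytical barrier calculation: required
# transmission B = P d^2 / (W U T); thickness = n * TVL with n = -log10(B).
# All numerical inputs except the 445 mm TVL (quoted in the abstract) are
# invented for illustration.
import math

P = 0.02e-3     # shielding design goal, Sv/week (hypothetical)
W = 500.0       # workload at 1 m, Gy/week (hypothetical)
U = 0.25        # use factor for this barrier (hypothetical)
T = 1.0         # occupancy factor (hypothetical)
d = 6.0         # source-to-point distance, m (hypothetical)
TVL = 0.445     # tenth-value layer in concrete, m (abstract's figure)

B = P * d**2 / (W * U * T)         # required transmission factor
n_tvl = -math.log10(B)             # number of tenth-value layers needed
thickness = n_tvl * TVL
print(f"B = {B:.2e}, n = {n_tvl:.2f} TVLs, thickness = {thickness * 1000:.0f} mm")
```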

  7. Magneto-transport properties of a two-dimensional electron gas under lateral periodic modulation

    NASA Astrophysics Data System (ADS)

    Shi, Qinwei

    Several physical systems related to a two-dimensional electron gas (2DEG) subjected to electric or magnetic modulations of various strengths have been studied theoretically. In Chapter 3, a quantum transport theory is developed for the calculation of the magnetoresistance ρxx in a 2DEG subjected to a strong one-dimensional periodic potential at low uniform magnetic field (the Weiss oscillation regime). The theory is based on the exact diagonalization of the Hamiltonian and the constant-relaxation-time approximation. The theoretical predictions are in good agreement with the experimental results; the discrepancy between the classical calculation and experiment is removed in our quantum treatment. In particular, the quenching of the Weiss oscillations is understood within this framework. In Chapter 4, the non-perturbative method for the electrically modulated system (EMS) is used to calculate the magnetoresistance ρxx for a magnetically modulated system (MMS), i.e. a 2DEG subjected to a strong one-dimensional periodic magnetic modulation at low uniform magnetic field. As the amplitude of the magnetic modulation increases, we first find a quenching of the low-field oscillations, similar to the quenching of the Weiss oscillations in the EMS case. As the strength of the magnetic modulation increases further, a new series of oscillations appears in our calculation. The temperature dependence of these new oscillations shows that their basic mechanism is similar to that of the Weiss oscillations, and their origin can be identified with the extra term in the Hamiltonian for the MMS case. In Chapter 5, a self-consistent quantum transport theory is developed to calculate the magnetoconductivities in a 2DEG subjected to a strong one-dimensional periodic potential at high uniform magnetic field (the Shubnikov-de Haas (SdH) oscillation regime). The theory is based on the self-consistent Born approximation (SCBA) for randomly distributed short-range impurities, together with an exact diagonalization of the Hamiltonian. Quantum oscillations of the magnetoconductivities as a function of the amplitude of the electric modulation are calculated, and the basic mechanism behind these oscillations is discussed. In Chapter 6, a tight-binding model is used to discuss the energy spectrum of a 2DEG subjected to a strong two-dimensional magnetic modulation and a uniform magnetic field corresponding to a rational value of the magnetic flux per unit cell, φ = (p/q)φ0. Some symmetries broken in the case of one-dimensional magnetic modulation are recovered in the two-dimensional case. Furthermore, when q is even, the magnetic Bloch band is broken into q subbands, while for odd q it is broken into 2q subbands. This has interesting implications for the magnetotransport properties as φ is changed. Our energy spectrum is similar to, but more complex than, the Hofstadter butterfly. Some suggestions for observing the new fractal energy spectrum are made.

  8. Modeling of Continuum Manipulators Using Pythagorean Hodograph Curves.

    PubMed

    Singh, Inderjeet; Amara, Yacine; Melingui, Achille; Mani Pathak, Pushparaj; Merzouki, Rochdi

    2018-05-10

    Research on continuum manipulators is increasingly developing in the context of bionic robotics because of their many advantages over conventional rigid manipulators. Due to their soft structure, they have inherent flexibility, which makes controlling them with high performance a huge challenge. Before elaborating a control strategy for such robots, it is essential first to reconstruct the behavior of the robot through the development of an approximate behavioral model, which can be kinematic or dynamic depending on the operating conditions of the robot. Kinematically, two types of modeling methods exist to describe robot behavior: quantitative methods, which are model based, and qualitative methods, which are learning based. In kinematic modeling of continuum manipulators, the assumption of constant curvature is often made to simplify the model formulation. In this work, a quantitative modeling method is proposed, based on Pythagorean hodograph (PH) curves. The aim is to obtain a three-dimensional reconstruction of the shape of the continuum manipulator with variable curvature, allowing the calculation of its inverse kinematic model (IKM). The PH-based kinematic modeling of continuum manipulators performs considerably well compared with other kinematic modeling methods in terms of position accuracy, shape reconstruction, and time/cost of the model calculation, for two cases: free-load manipulation and variable-load manipulation. The modeling method is applied to the compact bionic handling assistant (CBHA) manipulator for validation, and the results are compared with other IKMs developed for the CBHA manipulator.
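
    As background on the geometric primitive used here: a planar polynomial curve is a Pythagorean hodograph (PH) curve when its derivative components satisfy x'(t)^2 + y'(t)^2 = sigma(t)^2 for some polynomial sigma(t), which makes arc length available in closed form. The sketch below illustrates the generic planar PH construction from preimage polynomials (a textbook construction, not the paper's IKM); the polynomials u and v are arbitrary choices.

        import numpy as np

        # Planar PH curve from preimage polynomials u(t), v(t):
        #   x'(t) = u^2 - v^2,  y'(t) = 2uv,  sigma(t) = u^2 + v^2,
        # so sqrt(x'^2 + y'^2) = sigma identically (no square roots).
        u = np.poly1d([1.0, 0.5])      # u(t) = t + 0.5   (arbitrary)
        v = np.poly1d([0.5, 1.0])      # v(t) = 0.5 t + 1 (arbitrary)

        xp = u * u - v * v             # hodograph components
        yp = 2 * u * v
        sigma = u * u + v * v          # polynomial parametric speed

        x, y = xp.integ(), yp.integ()  # curve coordinates
        arclen = sigma.integ()         # exact arc-length function

        t = 1.0
        assert np.isclose(np.hypot(xp(t), yp(t)), sigma(t))
        print(x(t), y(t), arclen(1.0) - arclen(0.0))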

  9. Evaluation of the national Notifiable Diseases Surveillance System for dengue fever in Taiwan, 2010-2012.

    PubMed

    McKerr, Caoimhe; Lo, Yi-Chun; Edeghere, Obaghe; Bracebridge, Sam

    2015-03-01

    In Taiwan, around 1,500 cases of dengue fever are reported annually, and incidence has been increasing over time. A national web-based Notifiable Diseases Surveillance System (NDSS) has been in operation since 1997 to monitor incidence and trends and to support case and outbreak management. We present the findings of an evaluation of the NDSS to ascertain the extent to which dengue fever surveillance objectives are being achieved. We extracted NDSS data on all laboratory-confirmed dengue fever cases reported from 1 January 2010 to 31 December 2012 to assess and describe key system attributes based on the Centers for Disease Control and Prevention surveillance evaluation guidelines. The system's structure and processes were delineated, and operational staff were interviewed using a semi-structured questionnaire. Crude and age-adjusted incidence rates were calculated, and key demographic variables were summarised to describe reporting activity. Data completeness and validity were described across several variables. Of 5,072 laboratory-confirmed dengue fever cases reported during 2010-2012, 4,740 (93%) were reported during July to December. The system was judged to be simple due to its minimal reporting steps. Data collected on key variables were correctly formatted and usable in >90% of cases, demonstrating good data completeness and validity. The information collected was considered relevant by users, with high acceptability. Adherence to guidelines for 24-hour reporting was 99%. Of 720 cases (14%) recorded as travel-related, 111 (15%) had an onset >14 days after return, highlighting the potential for misclassification. Information on hospitalization was missing for 22% of cases. The calculated predictive value positive (PVP) was 43%. The NDSS for dengue fever surveillance is a robust, well-maintained and acceptable system that supports the collection of the complete and valid data needed to achieve the surveillance objectives. The simplicity of the system engenders compliance, leading to timely and accurate reporting. Completeness of hospitalization information could be further improved to allow assessment of severity of illness. To minimize misclassification, an algorithm to accurately classify travel cases should be established.
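
    Since the evaluation reports crude and age-adjusted incidence rates, a minimal sketch of direct age standardization may be useful; the age bands, counts, and standard-population weights below are invented for illustration.

        # Direct age standardization: weight age-specific rates by a
        # standard population's age shares (all numbers hypothetical).
        cases      = [120, 340, 260]            # cases per age band
        population = [800_000, 1_200_000, 600_000]
        std_weight = [0.35, 0.45, 0.20]         # standard shares, sum to 1

        rates = [c / p * 100_000 for c, p in zip(cases, population)]
        crude = sum(cases) / sum(population) * 100_000
        adjusted = sum(w * r for w, r in zip(std_weight, rates))
        print(f"crude {crude:.1f}, age-adjusted {adjusted:.1f} per 100,000")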

  10. Radiative and thermodynamic responses to aerosol extinction profiles during the pre-monsoon month over South Asia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y.; Kotamarthi, V. R.; Coulter, R.

    Aerosol radiative effects and thermodynamic responses over South Asia are examined with the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) for March 2012. Model results of aerosol optical depths (AODs) and extinction profiles are analyzed and compared to satellite retrievals and two ground-based lidars located in northern India. The WRF-Chem model is found to heavily underestimate the AOD during the simulated pre-monsoon month and about 83% of the model's low bias is due to aerosol extinctions below ~2 km. Doubling the calculated aerosol extinctions below 850 hPa generates much better agreement with the observed AOD and extinction profiles averaged over South Asia. To separate the effect of absorption and scattering properties, two runs were conducted: in one run (Case I), the calculated scattering and absorption coefficients were increased proportionally, while in the second run (Case II) only the calculated aerosol scattering coefficient was increased. With the same AOD and extinction profiles, the two runs produce significantly different radiative effects over land and oceans. On the regional mean basis, Case I generates 48% more heating in the atmosphere and 21% more dimming at the surface than Case II. Case I also produces stronger cooling responses over the land from the longwave radiation adjustment and boundary layer mixing. These rapid adjustments offset the stronger radiative heating in Case I and lead to an overall lower-troposphere cooling up to -0.7 K day⁻¹, which is smaller than that in Case II. Over the ocean, direct radiative effects dominate the heating rate changes in the lower atmosphere lacking such surface and lower atmosphere adjustments due to fixed sea surface temperature, and the strongest atmospheric warming is obtained in Case I. Consequently, atmospheric dynamics (boundary layer heights and meridional circulation) and thermodynamic processes (water vapor and cloudiness) are shown to respond differently between Case I and Case II, underlining the importance of determining the exact portion of scattering or absorbing aerosols that lead to the underestimation of aerosol optical depth in the model. Additionally, the model results suggest that both the direct radiative effect and rapid thermodynamic responses need to be quantified for understanding aerosol radiative impacts.
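
    For orientation, the heating rates diagnosed in these experiments follow from the vertical divergence of the net radiative flux; in pressure coordinates the standard relation (a textbook formula, not specific to this study) is

        \frac{\partial T}{\partial t} = \frac{g}{c_p} \frac{\partial F_{\mathrm{net}}}{\partial p},

    where F_net is the net downward radiative flux, g the gravitational acceleration, and c_p the specific heat of air at constant pressure.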

  11. Radiative and thermodynamic responses to aerosol extinction profiles during the pre-monsoon month over South Asia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Y.; Kotamarthi, V. R.; Coulter, R.

    Aerosol radiative effects and thermodynamic responses over South Asia are examined with a version of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) for March 2012. Model results of aerosol optical depth (AOD) and extinction profiles are analyzed and compared to satellite retrievals and two ground-based lidars located in northern India. The WRF-Chem model is found to underestimate the AOD during the simulated pre-monsoon month, and about 83% of the model's low bias is due to aerosol extinctions below ~2 km. Doubling the calculated aerosol extinctions below 850 hPa generates much better agreement with the observed AOD and extinction profiles averaged over South Asia. To separate the effect of absorption and scattering properties, two runs were conducted: in one run (Case I), the calculated scattering and absorption coefficients were increased proportionally, while in the second run (Case II) only the calculated aerosol scattering coefficient was increased. With the same AOD and extinction profiles, the two runs produce significantly different radiative effects over land and oceans. On the regional mean basis, Case I generates 48% more heating in the atmosphere and 21% more dimming at the surface than Case II. Case I also produces stronger cooling responses over the land from the longwave radiation adjustment and boundary layer mixing. These rapid adjustments offset the stronger radiative heating in Case I and lead to an overall lower-troposphere cooling up to -0.7 K day⁻¹, which is smaller than that in Case II. Over the ocean, direct radiative effects dominate the heating rate changes in the lower atmosphere lacking such surface and lower atmosphere adjustments due to fixed sea surface temperature, and the strongest atmospheric warming is obtained in Case I. Consequently, atmospheric dynamics (boundary layer heights and meridional circulation) and thermodynamic processes (water vapor and cloudiness) are shown to respond differently between Case I and Case II, underlining the importance of determining the exact portion of scattering or absorbing aerosols that lead to the underestimation of aerosol optical depth in the model. In addition, the model results suggest that both the direct radiative effect and rapid thermodynamic responses need to be quantified for understanding aerosol radiative impacts.

  12. Radiative and thermodynamic responses to aerosol extinction profiles during the pre-monsoon month over South Asia

    DOE PAGES

    Feng, Y.; Kotamarthi, V. R.; Coulter, R.; ...

    2016-01-18

    Aerosol radiative effects and thermodynamic responses over South Asia are examined with the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) for March 2012. Model results of aerosol optical depths (AODs) and extinction profiles are analyzed and compared to satellite retrievals and two ground-based lidars located in northern India. The WRF-Chem model is found to heavily underestimate the AOD during the simulated pre-monsoon month and about 83% of the model's low bias is due to aerosol extinctions below ~2 km. Doubling the calculated aerosol extinctions below 850 hPa generates much better agreement with the observed AOD and extinction profiles averaged over South Asia. To separate the effect of absorption and scattering properties, two runs were conducted: in one run (Case I), the calculated scattering and absorption coefficients were increased proportionally, while in the second run (Case II) only the calculated aerosol scattering coefficient was increased. With the same AOD and extinction profiles, the two runs produce significantly different radiative effects over land and oceans. On the regional mean basis, Case I generates 48% more heating in the atmosphere and 21% more dimming at the surface than Case II. Case I also produces stronger cooling responses over the land from the longwave radiation adjustment and boundary layer mixing. These rapid adjustments offset the stronger radiative heating in Case I and lead to an overall lower-troposphere cooling up to -0.7 K day⁻¹, which is smaller than that in Case II. Over the ocean, direct radiative effects dominate the heating rate changes in the lower atmosphere lacking such surface and lower atmosphere adjustments due to fixed sea surface temperature, and the strongest atmospheric warming is obtained in Case I. Consequently, atmospheric dynamics (boundary layer heights and meridional circulation) and thermodynamic processes (water vapor and cloudiness) are shown to respond differently between Case I and Case II, underlining the importance of determining the exact portion of scattering or absorbing aerosols that lead to the underestimation of aerosol optical depth in the model. Additionally, the model results suggest that both the direct radiative effect and rapid thermodynamic responses need to be quantified for understanding aerosol radiative impacts.

  13. Radiative and thermodynamic responses to aerosol extinction profiles during the pre-monsoon month over South Asia

    DOE PAGES

    Feng, Y.; Kotamarthi, V. R.; Coulter, R.; ...

    2015-06-19

    Aerosol radiative effects and thermodynamic responses over South Asia are examined with a version of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) for March 2012. Model results of aerosol optical depth (AOD) and extinction profiles are analyzed and compared to satellite retrievals and two ground-based lidars located in northern India. The WRF-Chem model is found to underestimate the AOD during the simulated pre-monsoon month, and about 83% of the model's low bias is due to aerosol extinctions below ~2 km. Doubling the calculated aerosol extinctions below 850 hPa generates much better agreement with the observed AOD and extinction profiles averaged over South Asia. To separate the effect of absorption and scattering properties, two runs were conducted: in one run (Case I), the calculated scattering and absorption coefficients were increased proportionally, while in the second run (Case II) only the calculated aerosol scattering coefficient was increased. With the same AOD and extinction profiles, the two runs produce significantly different radiative effects over land and oceans. On the regional mean basis, Case I generates 48% more heating in the atmosphere and 21% more dimming at the surface than Case II. Case I also produces stronger cooling responses over the land from the longwave radiation adjustment and boundary layer mixing. These rapid adjustments offset the stronger radiative heating in Case I and lead to an overall lower-troposphere cooling up to -0.7 K day⁻¹, which is smaller than that in Case II. Over the ocean, direct radiative effects dominate the heating rate changes in the lower atmosphere lacking such surface and lower atmosphere adjustments due to fixed sea surface temperature, and the strongest atmospheric warming is obtained in Case I. Consequently, atmospheric dynamics (boundary layer heights and meridional circulation) and thermodynamic processes (water vapor and cloudiness) are shown to respond differently between Case I and Case II, underlining the importance of determining the exact portion of scattering or absorbing aerosols that lead to the underestimation of aerosol optical depth in the model. In addition, the model results suggest that both the direct radiative effect and rapid thermodynamic responses need to be quantified for understanding aerosol radiative impacts.

  15. Accurate calculation of multispar cantilever and semicantilever wings with parallel webs under direct and indirect loading

    NASA Technical Reports Server (NTRS)

    Sanger, Eugen

    1932-01-01

    In the present report the computation is actually carried through for the case of parallel spars of equal resistance in bending without direct loading, including plotting of the influence lines; for other cases the method of calculation is explained. The development of large size airplanes can be speeded up by accurate methods of calculation such as this.

  16. Some computer graphical user interfaces in radiation therapy.

    PubMed

    Chow, James C L

    2016-03-28

    In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced: (1) the superficial X-ray treatment time calculator (SUPCALC) used in superficial X-ray radiation therapy; (2) the electron monitor unit calculator (EMUC) used in electron radiation therapy; (3) the sliding window intensity modulated radiotherapy (SWIMRT) multileaf collimator machine file creator, used to generate fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system DOSCTP, used in the calculation of 3D dose distributions using Monte Carlo simulation; and (5) the photon beam monitor unit calculator (PMUC) used in photon beam radiation therapy. One common feature of these GUIs is that the user-friendly interfaces are linked to complex formulas and algorithms based on various theories, which the user does not need to understand; the user only needs to input the required information, with help from graphical elements, in order to produce the desired results. SUPCALC is a superficial radiation treatment time calculator that uses the GUI technique to provide a convenient way for the radiation therapist to calculate the treatment time and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy. Instead of doing hand calculations according to predetermined dosimetric tables, the clinical user needs only to input the required drawing of the electron field in a computer graphics file format, the prescription dose, and the beam parameters for EMUC to calculate the required monitor units for the electron beam treatment. EMUC is based on a semi-experimental sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator that generates a fluence map to be produced by a medical linear accelerator. This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system based on computed tomography images. Radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed at different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that the 3D dose distribution can be determined accurately for radiation therapy; it can be used for treatment planning of patients or small animals. PMUC is a GUI for calculating monitor units based on the patient's prescription dose in photon beam radiation therapy. The calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance, and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic .NET or with the MATLAB GUI development tool GUIDE. In addition, all GUIs were verified and tested against measurements to ensure that their accuracy was at clinically acceptable levels for implementation.
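
    To make the monitor-unit arithmetic concrete, here is a minimal sketch of the correction-factor chain a photon MU calculator typically applies; the factor names and numbers are generic textbook quantities, not PMUC's actual algorithm.

        def monitor_units(dose_cGy, dose_rate_cGy_per_MU, output_factor,
                          tpr, wedge_factor=1.0, inverse_square=1.0):
            """Generic photon-beam MU calculation (illustrative only):
            MU = D / (reference dose rate x output factor x TPR x
                      wedge factor x inverse-square correction)."""
            return dose_cGy / (dose_rate_cGy_per_MU * output_factor
                               * tpr * wedge_factor * inverse_square)

        # Hypothetical values: 200 cGy prescription, 1 cGy/MU calibration,
        # combined output factor 0.98, tissue-phantom ratio 0.85 at depth.
        print(round(monitor_units(200.0, 1.0, 0.98, 0.85)))  # about 240 MU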

  17. IDACstar: A MCNP Application to Perform Realistic Dose Estimations from Internal or External Contamination of Radiopharmaceuticals.

    PubMed

    Ören, Ünal; Hiller, Mauritius; Andersson, M

    2017-04-28

    A Monte Carlo-based stand-alone program, IDACstar (Internal Dose Assessment by Computer), was developed to perform radiation dose calculations using complex voxel simulations. To test the program, two irradiation situations were simulated: a hypothetical contamination case with 600 MBq of 99mTc and an extravasation case involving 370 MBq of 18F-FDG. The effective dose was estimated to be 0.042 mSv for the contamination case and 4.5 mSv for the extravasation case. IDACstar demonstrated that dosimetry results for contamination or extravasation cases can be acquired with great ease. IDACstar provides an effective tool for radiation protection applications, allowing physicists in nuclear medicine departments to easily quantify the radiation risk of stochastic effects when a radiation accident has occurred.

  18. Comprehensive evaluation of impacts of distributed generation integration in distribution network

    NASA Astrophysics Data System (ADS)

    Peng, Sujiang; Zhou, Erbiao; Ji, Fengkun; Cao, Xinhui; Liu, Lingshuang; Liu, Zifa; Wang, Xuyang; Cai, Xiaoyu

    2018-04-01

    Distributed generation (DG), as a supplement to centralized renewable energy utilization, is becoming a focus of renewable energy development. With the increasing proportion of DG in distribution networks, the network power structure, power flow distribution, operation plans, and protection are all affected to some extent. According to the main impacts of DG, a comprehensive evaluation model for distribution networks with DG is proposed in this paper. A comprehensive evaluation index system covering 7 aspects, along with the corresponding index calculation methods, is established for quantitative analysis. The indices under different DG access capacities in the distribution network are calculated based on the IEEE RBTS-Bus 6 system, and the overall evaluation result is obtained by the analytic hierarchy process (AHP). A case study verifies that the proposed model and method are effective and valid.
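
    For readers unfamiliar with AHP, the sketch below shows the standard principal-eigenvector weighting step on a hypothetical 3x3 pairwise-comparison matrix; it illustrates generic AHP, not the paper's actual index hierarchy.

        import numpy as np

        # Hypothetical pairwise-comparison matrix for three criteria
        # (reciprocal by construction: A[j, i] = 1 / A[i, j]).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)          # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                         # criterion weights

        # Consistency check: CI / RI, with RI = 0.58 for a 3x3 matrix.
        CI = (eigvals[k].real - 3) / (3 - 1)
        print("weights", w.round(3), "CR", round(CI / 0.58, 3))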

  19. A calculation model to half-life estimate of two-proton radioactive decay process

    NASA Astrophysics Data System (ADS)

    Tavares, O. A. P.; Medeiros, E. L.

    2018-04-01

    Partial half-lives for radioactive decay by the two-proton emission mode have been estimated for proton-rich nuclei of mass number 18 < A < 68 using a model based on the quantum mechanical tunneling mechanism through a potential barrier. The Coulomb, centrifugal, and overlapping contributions to the barrier have been considered within the spherical nucleus approximation. The present calculation method reproduces the existing experimental half-life data for the 2p-emitter nuclides 19Mg, 45Fe, 48Ni, and 54Zn within a factor of six. For the 67Kr parent nucleus, the calculated partial 2p-decay half-life is ten times greater than the recent, unique value measured at the RIKEN Nishina Center. Predictions for new, as-yet-unmeasured cases of two-proton radioactivity are also reported.
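
    The tunneling estimate underlying such models can be stated compactly; in generic WKB form (a standard formulation, not the authors' exact parametrization),

        T_{1/2} = \frac{\ln 2}{\nu_0 P}, \qquad P = \exp\!\left( -\frac{2}{\hbar} \int_{r_i}^{r_o} \sqrt{2\mu \, [\, V(r) - Q \,]} \; dr \right),

    where ν0 is the assault frequency on the barrier, μ the reduced mass of the emitted system, Q the decay energy, V(r) the total (Coulomb plus centrifugal plus overlapping) barrier, and r_i, r_o the classical turning points.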

  20. An empirically derived basis for calculating the area, rate, and distribution of water-drop impingement on airfoils

    NASA Technical Reports Server (NTRS)

    Bergrun, Norman R

    1952-01-01

    An empirically derived basis for predicting the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The concepts involved represent an initial step toward the development of a calculation technique which is generally applicable to the design of thermal ice-prevention equipment for airplane wing and tail surfaces. It is shown that sufficiently accurate estimates, for the purpose of heated-wing design, can be obtained by a few numerical computations once the velocity distribution over the airfoil has been determined. The calculation technique presented is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer.

  1. Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.

    PubMed

    Meyer, Veronika R

    2003-09-01

    Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. They therefore facilitate setting up the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as the basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This simplicity comes at the cost of losing information about the parameters that influence the measurement uncertainty.
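
    A minimal sketch of the second approach described above, combining the intermediate precision with the reference-material purity uncertainty in quadrature (all numbers invented):

        import math

        # Intermediate precision of the assay (relative standard uncertainty)
        u_precision = 0.012            # 1.2 %, hypothetical
        # Purity of the reference standard: 99.5 % +/- 0.3 %, treated as a
        # rectangular distribution, hence division by sqrt(3).
        u_purity = 0.003 / math.sqrt(3)

        # Combined standard uncertainty (Gaussian sum in quadrature)
        u_combined = math.sqrt(u_precision**2 + u_purity**2)
        U_expanded = 2 * u_combined    # coverage factor k = 2 (~95 %)
        print(f"u_c = {u_combined:.4f}, U (k=2) = {U_expanded:.4f}")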

  2. Managing simulation-based training: A framework for optimizing learning, cost, and time

    NASA Astrophysics Data System (ADS)

    Richmond, Noah Joseph

    This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation-based training (SBT) and reality-based training (RBT) as resources. Simulation is shown to be an effective means of implementing activity substitution as a way to reduce risk. The risk profiles of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints on cost and time, as well as to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are next provided. Finally, a simulation strategy comprising an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.

  3. A geographic information system-based method for estimating cancer rates in non-census defined geographical areas.

    PubMed

    Freeman, Vincent L; Boylan, Emma E; Pugach, Oksana; Mclafferty, Sara L; Tossas-Milligan, Katherine Y; Watson, Karriem S; Winn, Robert A

    2017-10-01

    To address locally relevant cancer-related health issues, health departments frequently need data beyond that contained in standard census area-based statistics. We describe a geographic information system-based method for calculating age-standardized cancer incidence rates in non-census defined geographical areas using publicly available data. Aggregated records of cancer cases diagnosed from 2009 through 2013 in each of Chicago's 77 census-defined community areas were obtained from the Illinois State Cancer Registry. Areal interpolation through dasymetric mapping of census blocks was used to redistribute populations and case counts from community areas to Chicago's 50 politically defined aldermanic wards, and ward-level age-standardized 5-year cumulative incidence rates were calculated. Potential errors in redistributing populations between geographies were limited to <1.5% of the total population, and agreement between our ward population estimates and those from a frequently cited reference set of estimates was high (Pearson correlation r = 0.99, mean difference = -4 persons). A map overlay of safety-net primary care clinic locations and ward-level incidence rates for advanced-staged cancers revealed potential pathways for prevention. Areal interpolation through dasymetric mapping can estimate cancer rates in non-census defined geographies. This can address gaps in local cancer-related health data, inform health resource advocacy, and guide community-centered cancer prevention and control.
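
    The core interpolation step, reallocating counts from source zones to target zones in proportion to the block populations they share, can be sketched as follows; the zone labels and counts are hypothetical.

        # Areal interpolation by population weighting: each census block
        # carries its population into the ward that contains it, and
        # community-area case counts follow those population shares.
        # blocks[i] = (community_area, ward, population)
        blocks = [("CA1", "W1", 4000), ("CA1", "W2", 6000), ("CA2", "W2", 5000)]
        cases = {"CA1": 50, "CA2": 20}    # cases per community area

        src_pop = {}
        for s, _, p in blocks:
            src_pop[s] = src_pop.get(s, 0) + p

        ward_cases = {}
        for s, t, p in blocks:
            share = p / src_pop[s]        # block's share of its source zone
            ward_cases[t] = ward_cases.get(t, 0.0) + cases[s] * share

        print(ward_cases)                 # {'W1': 20.0, 'W2': 50.0}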

  4. Why convective heat transport in the solar nebula was inefficient

    NASA Technical Reports Server (NTRS)

    Cassen, P.

    1993-01-01

    The radial distributions of the effective temperatures of circumstellar disks associated with pre-main-sequence (T Tauri) stars are relatively well constrained by ground-based and spacecraft infrared photometry and radio continuum observations. If the mechanisms by which energy is transported vertically in the disks are understood, these data can be used to constrain models of the thermal structure and evolution of the solar nebula. Several studies of the evolution of the solar nebula have included the calculation of the vertical transport of heat by convection. Such calculations rely on a mixing length theory of transport and some assumption regarding the vertical distribution of internal dissipation. In all cases, the results of these calculations indicate that transport by radiation dominates that by convection, even when the nebula is convectively unstable. A simple argument is presented that demonstrates the generality (and limits) of this result, regardless of the details of mixing length theory or the precise distribution of internal heating. It is based on the idea that the radiative gradient in an optically thick nebula generally does not greatly exceed the adiabatic gradient.
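
    The criterion at stake in this argument is the standard Schwarzschild condition for the onset of convection,

        \nabla_{\mathrm{rad}} > \nabla_{\mathrm{ad}}, \qquad \nabla \equiv \frac{d \ln T}{d \ln P},

    so that when the radiative gradient only marginally exceeds the adiabatic one, convection is weakly driven and carries little of the vertical heat flux.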

  5. Approximate quasiparticle correction for calculations of the energy gap in two-dimensional materials

    NASA Astrophysics Data System (ADS)

    Guilhon, I.; Koda, D. S.; Ferreira, L. G.; Marques, M.; Teles, L. K.

    2018-01-01

    At the same time that two-dimensional (2D) systems open possibilities for new physics and applications, they present a greater challenge for electronic structure calculations, especially concerning excitations. A fast and accurate practical model that incorporates approximate quasiparticle corrections can open an avenue for more reliable band structure calculations of complex systems, such as interactions of 2D materials with substrates or molecules, as well as the formation of van der Waals heterostructures. In this work, we demonstrate that the performance of the fast and parameter-free DFT-1/2 method is comparable with state-of-the-art GW and superior to the HSE06 hybrid functional for the majority of the 34 different 2D materials studied. Moreover, based on knowledge of the method and chemical information about the material, we can predict the small number of cases in which the method is not so effective, and we also provide the best recipe for an optimized DFT-1/2 method based on the electronegativity difference of the bonding atoms.

  6. Electronic structure of boron based single and multi-layer two dimensional materials

    NASA Astrophysics Data System (ADS)

    Miyazato, Itsuki; Takahashi, Keisuke

    2017-09-01

    Two-dimensional nanosheets based on boron and Group VA elements are designed and characterized using first-principles calculations. B-N, B-P, B-As, B-Sb, and B-Bi are found to possess honeycomb structures, with formation energies indicating exothermic reactions. In contrast to B-N, the B-P, B-As, B-Sb, and B-Bi nanosheets are calculated to possess narrow band gaps. In addition, the calculations reveal that the electronegativity difference between B and the Group VA element is a good indicator for predicting the charge transfer and band gap of these two-dimensional materials. Hydrogen adsorption over defect-free B-Sb and B-Bi is exothermic, while over defect-free B-N, B-P, and B-As it is endothermic. The layerability of the designed two-dimensional materials is also investigated; the electronic structure of two-layered materials is strongly coupled to how the layers are stacked. Thus, the properties of these two-dimensional materials can be controlled by their composition and by the structure of the layers.

  7. Derivation of effective fission gas diffusivities in UO2 from lower length scale simulations and implementation of fission gas diffusion models in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Anders David Ragnar; Pastore, Giovanni; Liu, Xiang-Yang

    2014-11-07

    This report summarizes the development of new fission gas diffusion models from lower-length-scale simulations and the assessment of these models against annealing experiments and fission gas release simulations using the BISON fuel performance code. Based on the mechanisms established from density functional theory (DFT) and empirical potential calculations, continuum models for diffusion of xenon (Xe) in UO2 were derived for both intrinsic conditions and under irradiation. The importance of the large XeU3O cluster (a Xe atom in a uranium + oxygen vacancy trap site with two bound uranium vacancies) is emphasized, which is a consequence of its high mobility and stability. These models were implemented in the MARMOT phase field code, which is used to calculate effective Xe diffusivities for various irradiation conditions. The effective diffusivities were used in BISON to calculate fission gas release for a number of test cases. The results are assessed against experimental data, and future directions for research are outlined based on the conclusions.

  8. The improvement of the method of equivalent cross section in HTR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, J.; Li, F.

    The Method of Equivalent Cross Sections (MECS) is a combined transport-diffusion method. By appropriately adjusting the diffusion coefficient of the homogenized absorber region, diffusion theory can yield satisfactory results for a full-core model with strongly absorbing material, for example the control rods in a high temperature gas-cooled reactor (HTR). The original implementation of MECS, based on a 1-D cell transport model, has limitations in accuracy and applicability; a new implementation based on a 2-D transport model is proposed and tested in this paper. This improvement extends MECS to the calculation of the twin small-absorber-ball system, which has a non-circular boring in the graphite reflector and a different radial position. A least-squares algorithm for the calculation of the equivalent diffusion coefficient is adopted, and a special treatment of the diffusion coefficient for the higher energy groups is proposed for the case in which the absorber is absent. Numerical results from adopting MECS in control rod calculations for the HTR are encouraging. However, some problems remain. (authors)

  9. Estimating population size in wastewater-based epidemiology. Valencia metropolitan area as a case study.

    PubMed

    Rico, María; Andrés-Costa, María Jesús; Picó, Yolanda

    2017-02-05

    Wastewater can provide a wealth of epidemiologic data on commonly consumed drugs and on health and nutritional problems, based on the biomarkers excreted into community sewage systems. One of the biggest uncertainties of these studies is the estimation of the number of inhabitants served by the treatment plants. Twelve human urine biomarkers (5-hydroxyindoleacetic acid (5-HIAA), acesulfame, atenolol, caffeine, carbamazepine, codeine, cotinine, creatinine, hydrochlorothiazide (HCTZ), naproxen, salicylic acid (SA) and hydroxycotinine (OHCOT)) were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to estimate population size. The results reveal that populations calculated from cotinine, 5-HIAA and caffeine commonly agree with those calculated from the hydrochemical parameters. Creatinine is too unstable to be applicable. HCTZ, naproxen, codeine, OHCOT and carbamazepine under- or overestimate the population compared to the hydrochemical estimates but give consistent results across the weekdays. The consumption of cannabis, cocaine, heroin and bufotenine in Valencia was estimated for a week using the different population calculations.
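
    The underlying back-calculation is simple: divide the daily biomarker load arriving at the plant by an average per-capita excretion rate. A sketch with invented numbers:

        # Population back-calculation from a urine biomarker load.
        # All numbers are hypothetical placeholders.
        flow_L_per_day = 1.8e8              # daily influent flow of the plant
        conc_ng_per_L = 1500.0              # measured biomarker concentration
        excretion_mg_per_person_day = 1.2   # per-capita excretion (literature)

        load_mg_per_day = flow_L_per_day * conc_ng_per_L * 1e-6  # ng -> mg
        population = load_mg_per_day / excretion_mg_per_person_day
        print(f"estimated population served: {population:,.0f}")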

  10. A generic high-dose rate ¹⁹²Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es; Carlsson Tedgren, Åsa; Granero, Domingo

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using an MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) ¹⁹²Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR ¹⁹²Ir source was designed based on commercially available sources, as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic ¹⁹²Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR ¹⁹²Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 ± 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within Type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR ¹⁹²Ir source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.
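
    For context, the TG-43 formalism against which the MBDCAs are benchmarked expresses the dose rate around a line source in the standard AAPM form

        \dot{D}(r, \theta) = S_K \, \Lambda \, \frac{G_L(r, \theta)}{G_L(r_0, \theta_0)} \, g_L(r) \, F(r, \theta),

    with air-kerma strength S_K, dose-rate constant Λ (the 1.1109 cGy h⁻¹ U⁻¹ value quoted above), geometry function G_L, radial dose function g_L, and 2D anisotropy function F, referenced to r₀ = 1 cm and θ₀ = π/2.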

  11. Origin of Unusual Dependencies of LUMO Levels on Conjugation Length in Quinoidal Fused Oligosiloles

    NASA Astrophysics Data System (ADS)

    Misawa, Nana; Fujii, Mikiya; Shintani, Ryo; Tsuda, Tomohiro; Nozaki, Kyoko; Yamashita, Koichi

    Quinoidal fused oligosiloles, a new family of silicon-bridged π-conjugated compounds, have been synthesized, and their physical properties show a unique trend in their LUMO levels, which become higher with longer π-conjugation. Although this trend was reproduced by DFT calculations, its origin remained to be clarified. In this work we performed quantum chemical calculations and found that the unusual LUMO trend is attributable to the π-frameworks. We elucidated its origin essentially through orbital correlation diagrams based on classical Hückel calculations. However, the LUMO trend cannot be fully explained by Hückel calculations alone, because they do not account for geometry. In the case of quinoidal fused oligosiloles, the DFT results indicate that the fused silole structure plays an important role in fixing the bond angles of the linear polyene to the interior angles of the siloles, leading to the unusual LUMO behavior. This qualitative but essential understanding of the LUMO trend should provide new insight into the molecular design of π-conjugated compounds with tuned LUMO levels.
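
    As a reminder of the kind of Hückel analysis invoked here, the sketch below diagonalizes the Hückel matrix of a short linear polyene (a generic textbook model, unrelated to the actual oligosilole geometries) to obtain orbital energies in units of |β| relative to α.

        import numpy as np

        def huckel_energies(n_atoms):
            """Orbital energies of a linear polyene in simple Hückel theory:
            H[i, i] = alpha, H[i, i+1] = H[i+1, i] = beta, with alpha = 0 and
            beta = -1, i.e. energies in units of |beta| relative to alpha."""
            H = np.zeros((n_atoms, n_atoms))
            for i in range(n_atoms - 1):
                H[i, i + 1] = H[i + 1, i] = -1.0
            return np.sort(np.linalg.eigvalsh(H))

        # Butadiene-like chain of 4 p-orbitals; the closed form is
        # E_k = alpha + 2 beta cos(k pi / (n + 1)).
        print(huckel_energies(4))   # approx. [-1.618, -0.618, 0.618, 1.618]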

  12. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.

    2016-05-01

    Three-loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed directly for general coefficient matrices and any basis, with the expansion in the dimensional parameter also performed, whenever the solution is expressible in terms of indefinite nested product-sum expressions; this structural result is based on new results of our difference ring theory. In the cases discussed, we deal with iterative sum and integral solutions over general alphabets. The final results are expressed in terms of special sums forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
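
    The nested harmonic sums in which the final results are expressed follow the standard recursive definition (the general convention, recalled here for reference):

        S_b(N) = \sum_{k=1}^{N} \frac{(\mathrm{sign}\, b)^k}{k^{|b|}}, \qquad S_{b, \vec{a}}(N) = \sum_{k=1}^{N} \frac{(\mathrm{sign}\, b)^k}{k^{|b|}} \, S_{\vec{a}}(k),

    for nonzero integer indices; generalized harmonic sums additionally carry fixed parameters raised to the power k in the numerators, and cyclotomic sums generalize the denominators to the form (c k + d)^{|b|}.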

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, Sara L., E-mail: slloupot@mdanderson.org; Shaitelman, Simona F.; Bloom, Elizabeth

    Purpose: To compare the treatment plans for accelerated partial breast irradiation calculated by the new commercially available collapsed cone convolution (CCC) algorithm and the current standard TG-43-based algorithm for 50 patients treated at our institution with either a Strut-Adjusted Volume Implant (SAVI) or a Contura device. Methods and Materials: We recalculated target coverage, volume of highly dosed normal tissue, and dose to organs at risk (ribs, skin, and lung) with each algorithm. For one case an artificial air pocket was added to simulate 10% nonconformance. We performed a Wilcoxon signed rank test to determine the median differences in the clinical indices V90, V95, V100, V150, V200, and highest-dosed 0.1 cm³ and 1.0 cm³ of rib, skin, and lung between the two algorithms. Results: The CCC algorithm calculated lower values on average for all dose-volume histogram parameters. Across the entire patient cohort, the median difference in the clinical indices calculated by the two algorithms was <10% for dose to organs at risk, <5% for target volume coverage (V90, V95, and V100), and <4 cm³ for dose to normal breast tissue (V150 and V200). No discernible difference was seen in the nonconformance case. Conclusions: We found that on average over our patient population CCC calculated lower (<10%) doses than TG-43. These results should inform clinicians as they prepare for the transition to heterogeneous dose calculation algorithms and determine whether clinical tolerance limits warrant modification.

  14. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    PubMed

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-07

    Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporating parameterized geometry yielded a computation time roughly 3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computation time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and from 0.69 to 1.23 times for photon-only transport.
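
    The geometric kernel such a module needs is the distance from a particle to a quadric bounding surface along its flight direction: substituting the ray into the quadric yields a quadratic equation in the path length. A self-contained sketch of this generic ray-quadric intersection (not the authors' code):

        import math

        def distance_to_quadric(p, d, coef):
            """Smallest positive t with f(p + t d) = 0 for the quadric
            f(x, y, z) = A x^2 + B y^2 + C z^2 + D x + E y + F z + G;
            returns None if the surface is never crossed in direction d."""
            A, B, C, D, E, F, G = coef
            x, y, z = p
            u, v, w = d
            a = A*u*u + B*v*v + C*w*w
            b = 2*(A*x*u + B*y*v + C*z*w) + D*u + E*v + F*w
            c = A*x*x + B*y*y + C*z*z + D*x + E*y + F*z + G
            if abs(a) < 1e-12:                    # equation is linear in t
                return -c / b if b != 0 and -c / b > 0 else None
            disc = b*b - 4*a*c
            if disc < 0:
                return None
            r = math.sqrt(disc)
            pos = [t for t in ((-b - r) / (2*a), (-b + r) / (2*a)) if t > 1e-9]
            return min(pos) if pos else None

        # Unit sphere x^2 + y^2 + z^2 - 1 = 0, particle at origin moving +x:
        print(distance_to_quadric((0, 0, 0), (1, 0, 0), (1, 1, 1, 0, 0, 0, -1)))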

  15. On the systematic approach to the classification of differential equations by group theoretical methods

    NASA Astrophysics Data System (ADS)

    Andriopoulos, K.; Dimas, S.; Leach, P. G. L.; Tsoubelis, D.

    2009-08-01

    Complete symmetry groups enable one to characterise fully a given differential equation. By considering the reversal of an approach based upon complete symmetry groups we construct new classes of differential equations which have the equations of Bateman, Monge-Ampère and Born-Infeld as special cases. We develop a symbolic algorithm to decrease the complexity of the calculations involved.

  16. Full distortion induced by dispersion evaluation and optical bandwidth constraining of fiber Bragg grating demultiplexers over analogue SCM systems.

    PubMed

    Martinez, Alfonso; Pastor, Daniel; Capmany, Jose

    2002-12-30

    We provide a full analysis of the distortion effects produced by the first- and second-order in-band dispersion of fiber Bragg grating based optical demultiplexers on analogue SCM (subcarrier multiplexed) signals. Optical bandwidth utilization ranges for dense WDM networks are calculated considering different SCM system cases of frequency extension and modulation conditions.

  17. Thermometric titration of acids in pyridine.

    PubMed

    Vidal, R; Mukherjee, L M

    1974-04-01

    Thermometric titrations of HClO4, HI, HNO3, HBr, picric acid, o-nitrobenzoic acid, 2,4- and 2,5-dinitrophenol, acetic acid, and benzoic acid have been attempted in pyridine as the solvent, using 1,3-diphenylguanidine as the base. Except in the cases of 2,5-dinitrophenol, acetic acid, and benzoic acid, the results are in general reasonably satisfactory. The approximate molar heats of neutralization have been calculated.

  18. 17 CFR 270.30b1-6T - Weekly portfolio report for certain money market funds.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...; (I) The amortized cost value; and (J) In the case of a tax-exempt security, whether there is a demand... the fund's stable net asset value per share or stable price per share pursuant to § 270.2a-7(c)(1...) Market-based NAV means a money market fund's net asset value per share calculated using available market...

  19. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical capability gap in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading; it provides expected upper and lower bounds on the balance load prediction at a given confidence level. A method has been developed that accounts for sources of variability due to calibration and check-load application. The method for calculating the prediction interval and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
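
    As a reference for the interval itself, the usual least-squares prediction interval for a new response at regressor vector x0 (the generic regression form, not necessarily the project's exact model) is

        \hat{y}_0 \pm t_{\alpha/2,\, n-p} \; s \sqrt{1 + x_0^{\mathsf{T}} (X^{\mathsf{T}} X)^{-1} x_0},

    where s² is the residual mean square of the calibration fit with n observations and p parameters, X is the calibration design matrix, and t is the Student-t quantile at the chosen confidence level.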

  20. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Bai, T

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We studied the impact of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis showed that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle; the total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed; the whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and The Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
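
    A schematic of steps 3-5, estimating scatter at a few sparse angles and interpolating to all projection angles before subtraction, can be sketched as follows; the array shapes and linear interpolation are placeholders, not the authors' implementation.

        import numpy as np

        def correct_projections(raw, scatter_sparse, sparse_angles):
            """Subtract angle-interpolated scatter from raw projections.

            raw            : (n_angles, H, W) measured projections
            scatter_sparse : (n_sparse, H, W) MC scatter estimates
            sparse_angles  : indices of the angles where MC was run
            """
            n_angles, H, W = raw.shape
            corrected = np.empty_like(raw)
            for a in range(n_angles):
                # pixel-wise linear interpolation along the angle axis
                s = np.array([np.interp(a, sparse_angles, scatter_sparse[:, i, j])
                              for i in range(H) for j in range(W)]).reshape(H, W)
                corrected[a] = np.clip(raw[a] - s, 0.0, None)  # non-negative
            return corrected

        # Toy run: 8 projections of 4x4 pixels, scatter computed at 3 angles.
        raw = np.full((8, 4, 4), 10.0)
        scat = np.full((3, 4, 4), 2.0)
        print(correct_projections(raw, scat, [0, 4, 7])[3, 0, 0])   # 8.0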

  1. Validation of the Oncentra Brachy Advanced Collapsed cone Engine for a commercial (192)Ir source using heterogeneous geometries.

    PubMed

    Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc

    2015-01-01

    To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of the Oncentra Brachy (OcB) treatment planning system for an (192)Ir source, two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of the American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against the TG-43 methodology; Level 2 uses real patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system, and the ALGEBRA MC system was used to perform the MC simulations. In Level 1, the ray effect depends on both the accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly falls from 23% (13%) for a single dwell position to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed to overlap between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated against MC. For example, among the Level 2 cases, the maximum deviation of ACE from MC in V100 is 2.75%, but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC, but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departures from MC are significant in specific situations but limited to low-dose (<10% isodose) regions.

  2. Poster — Thur Eve — 33: The Influence of a Modeled Treatment Couch on Dose Distributions During IMRT and RapidArc Treatment Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldosary, Ghada; Nobah, Ahmad; Al-Zorkani, Faisal

    2014-08-15

    Treatment couches are known to perturb dose delivery in patients. This effect is most pronounced in techniques such as IMRT and RapidArc. Although modern treatment planning systems (TPS) include data for a “default” treatment couch, actual couches are not manufactured identically, so variations in their Hounsfield Unit (HU) values may exist. This study demonstrates a practical and simple method of acquiring reliable HU data for any treatment couch. We also investigate the effects of both the default and modeled treatment couches on absorbed dose. Experimental verification shows that neglecting to incorporate the treatment couch in the TPS produces dose differences of up to 9.5% and 7.3% for 4 MV and 10 MV photon beams, respectively. Furthermore, a clinical study based on a cohort of 20 RapidArc and IMRT (brain, pelvis, and abdominal) cases was performed. 2D dose distributions show that omitting the couch in the planning phase produces differences of ≤4.6% and ≤5.9% for RapidArc and IMRT cases, respectively, relative to the same cases planned with the default couch. Additionally, in comparison with the default couch, employing the modeled couch in the calculation process influences dose distributions by ≤2.7% and ≤8% for RapidArc and IMRT cases, respectively. This result was found to be site-specific: an accurately modeled couch proved preferable for IMRT brain plans. As such, adding the couch during dose calculation decreases dose calculation errors, and a precisely modeled treatment couch offers higher dose delivery accuracy for brain treatment using IMRT.

  3. The precautionary principle within European Union public health policy. The implementation of the principle under conditions of supranationality and citizenship.

    PubMed

    Antonopoulou, Lila; van Meurs, Philip

    2003-11-01

    The present study examines the precautionary principle within the parameters of public health policy in the European Union, regarding both its meaning, as it has been shaped by relevant EU institutions and their counterparts within the Member States, and its implementation in practice. In the initial section I concentrate on the methodological question of "scientific uncertainty" concerning the calculation of risk and possible damage. Calculation of risk in many cases justifies the adoption of preventive measures; but, as is argued here, the principle of precaution and its implementation cannot be wholly captured by a logic of calculation. Such a principle does not merely contain scientific uncertainty, as the preventive principle does; it is itself generated as a principle by this scientific uncertainty, recognising the need for a society to act. Thus, the implementation of the precautionary principle is also a simultaneous search for justification of its status as a principle: precautionary measures are adopted against a risk even though no proof based on the "cause-effect" model has been produced. The main part of the study examines three cases from which the stance of the official bodies of the European Union towards the precautionary principle and its implementation emerges, including the case of "mad cow" disease and the case of the production and commercialization of genetically modified foodstuffs. The study concludes with the assessment that the effective implementation of the precautionary principle on a European level depends on the emergence of a concerned Europe-wide citizenship and its acting as a mechanism to counteract the material and social conditions that pose risks for human health.

  4. Using Time-Driven Activity-Based Costing as a Key Component of the Value Platform: A Pilot Analysis of Colonoscopy, Aortic Valve Replacement and Carpal Tunnel Release Procedures.

    PubMed

    Martin, Jacob A; Mayhew, Christopher R; Morris, Amanda J; Bader, Angela M; Tsai, Mitchell H; Urman, Richard D

    2018-04-01

    Time-driven activity-based costing (TDABC) is a methodology that calculates the costs of the healthcare resources consumed as a patient moves along a care process. Limited data exist on the application of TDABC from the perspective of an anesthesia provider. We describe the use of TDABC, a bottom-up costing strategy, and report financial outcomes for three different medical-surgical procedures. In each case, a multi-disciplinary team created process maps describing the care delivery cycle for a patient encounter using the TDABC methodology. Each step in a process map delineated an activity required for the delivery of patient care. The resources (personnel, equipment, and supplies) associated with each step were identified. A per-minute cost for each resource expended, known as the capacity cost rate, was generated and multiplied by the resource's time requirement. The total cost for an episode of care was obtained by adding the cost of each individual resource consumed as the patient moved along the clinical pathway. We built process maps for colonoscopy in the gastroenterology suite, calculated the costs of aortic valve replacement by comparing surgical aortic valve replacement (SAVR) and transcatheter aortic valve replacement (TAVR) techniques, and determined the cost of carpal tunnel release in an operating room versus an ambulatory procedure room. TDABC is central to the value-based healthcare platform. Application of TDABC provides a framework to identify process improvements for healthcare delivery. The first case demonstrates cost savings and improved wait times from shifting some of the colonoscopies scheduled with an anesthesiologist from the main hospital to the ambulatory facility. In the second case, we show that deployment of an aortic valve via the transcatheter route front-loads the costs compared with traditional surgical replacement. The last case demonstrates significant cost savings to the healthcare system from re-organizing the staff required to execute a carpal tunnel release.
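
    The TDABC arithmetic described here reduces to summing (capacity cost rate × time) over the steps of a process map; a toy sketch with invented resources and numbers, not the study's actual data:

        # Each process-map step consumes a resource for some minutes; the
        # episode cost is the sum of rate x time. All figures are invented.
        from dataclasses import dataclass

        @dataclass
        class Step:
            name: str
            minutes: float
            cost_per_minute: float  # capacity cost rate of the resource

        def episode_cost(steps):
            return sum(s.minutes * s.cost_per_minute for s in steps)

        colonoscopy = [
            Step("check-in (clerk)", 10, 0.50),
            Step("pre-op nursing", 20, 1.20),
            Step("procedure (gastroenterologist)", 30, 8.00),
            Step("procedure room + scope", 30, 2.50),
            Step("recovery (nurse)", 45, 1.20),
        ]
        print(f"episode cost: ${episode_cost(colonoscopy):.2f}")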

  5. Using Time-Driven Activity-Based Costing as a Key Component of the Value Platform: A Pilot Analysis of Colonoscopy, Aortic Valve Replacement and Carpal Tunnel Release Procedures

    PubMed Central

    Martin, Jacob A.; Mayhew, Christopher R.; Morris, Amanda J.; Bader, Angela M.; Tsai, Mitchell H.; Urman, Richard D.

    2018-01-01

    Background Time-driven activity-based costing (TDABC) is a methodology that calculates the costs of the healthcare resources consumed as a patient moves along a care process. Limited data exist on the application of TDABC from the perspective of an anesthesia provider. We describe the use of TDABC, a bottom-up costing strategy, and report financial outcomes for three different medical-surgical procedures. Methods In each case, a multi-disciplinary team created process maps describing the care delivery cycle for a patient encounter using the TDABC methodology. Each step in a process map delineated an activity required for the delivery of patient care. The resources (personnel, equipment and supplies) associated with each step were identified. A per-minute cost for each resource expended, known as the capacity cost rate, was generated and multiplied by the resource's time requirement. The total cost for an episode of care was obtained by adding the cost of each individual resource consumed as the patient moved along the clinical pathway. Results We built process maps for colonoscopy in the gastroenterology suite, calculated the costs of aortic valve replacement by comparing surgical aortic valve replacement (SAVR) and transcatheter aortic valve replacement (TAVR) techniques, and determined the cost of carpal tunnel release in an operating room versus an ambulatory procedure room. Conclusions TDABC is central to the value-based healthcare platform. Application of TDABC provides a framework to identify process improvements for healthcare delivery. The first case demonstrates cost savings and improved wait times from shifting some of the colonoscopies scheduled with an anesthesiologist from the main hospital to the ambulatory facility. In the second case, we show that deployment of an aortic valve via the transcatheter route front-loads the costs compared with traditional surgical replacement. The last case demonstrates significant cost savings to the healthcare system from re-organizing the staff required to execute a carpal tunnel release. PMID:29511420

  6. Additional nuclear criticality safety calculations for small-diameter containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hone, M.J.

    This report documents additional criticality safety analysis calculations for small-diameter containers, which were originally documented in Reference 1. The results in Reference 1 indicated that some of the small-diameter containers did not meet the criterion established for criticality safety at the Portsmouth facility (k_eff + 2σ < 0.95) when modeled under various contingency assumptions of reflection and moderation. The calculations performed in this report reexamine those cases which did not meet the criticality safety criterion. In some cases, unnecessary conservatism is removed, and in other cases mass or assay limits are established for use with the respective containers.

  7. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, these calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
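
    The coupling described above can be pictured as a fixed-point iteration between the transport and sub-channel solvers; a conceptual sketch in which run_mcnp and run_subchanflow are hypothetical stand-ins for the external codes, not their real interfaces:

        # Conceptual coupling loop: transport gives power, thermal-hydraulics
        # returns temperatures/densities, repeat until the feedback converges.
        import numpy as np

        def run_mcnp(temps, densities):           # placeholder for the MCNP call
            return np.ones_like(temps)            # normalized power per cell

        def run_subchanflow(power):               # placeholder for SUBCHANFLOW
            return 565.0 + 50.0 * power, 0.74 - 0.05 * power  # T (K), rho (g/cc)

        def couple(n_cells=10, tol=1e-4, max_iter=50):
            temps = np.full(n_cells, 565.0)
            densities = np.full(n_cells, 0.74)
            for it in range(max_iter):
                power = run_mcnp(temps, densities)
                new_temps, new_densities = run_subchanflow(power)
                if np.max(np.abs(new_temps - temps)) < tol:
                    return it, new_temps          # converged feedback fields
                temps, densities = new_temps, new_densities
            return max_iter, temps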

  8. Independent calculation of monitor units for VMAT and SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xin; Bush, Karl; Ding, Aiping

    Purpose: Dose and monitor units (MUs) represent two important facets of a radiation therapy treatment. In current practice, verification of a treatment plan is commonly done in the dose domain, in which a phantom measurement or forward dose calculation is performed to examine the dosimetric accuracy and the MU settings of a given treatment plan. While it is desirable to verify the MU settings directly, a computational framework for obtaining the MU values from a known dose distribution has yet to be developed. This work presents a strategy to independently calculate the MUs from a given dose distribution of volumetric modulated arc therapy (VMAT) and station parameter optimized radiation therapy (SPORT). Methods: The dose at a point can be expressed as a sum of contributions from all the station points (or control points). This relationship forms the basis of the proposed MU verification technique. To proceed, the authors first obtain the matrix elements that characterize the dosimetric contribution of the involved station points by computing the doses at a series of voxels, typically on the prescription surface of the VMAT/SPORT treatment plan, with unit MU settings for all the station points. An in-house Monte Carlo (MC) software is used for the dose matrix calculation. The MUs of the station points are then derived by minimizing the least-squares difference between the doses computed by the treatment planning system (TPS) and those of the MC for the selected set of voxels on the prescription surface. The technique is applied to 16 clinical cases with a variety of energies, disease sites, and TPS dose calculation algorithms. Results: For all plans except the lung cases with large tissue density inhomogeneity, the independently computed MUs agree with those of the TPS to within 2.7% for all the station points. In the dose domain, no significant difference between the MC and Eclipse Anisotropic Analytical Algorithm (AAA) dose distributions is found in terms of isodose contours, dose profiles, gamma index, and dose-volume histogram (DVH) for these cases. For the lung cases, the MC-calculated MUs differ significantly from those of the treatment plan computed using AAA. However, the discrepancies are reduced to within 3% when the TPS dose calculation algorithm is switched to a transport equation-based technique (Acuros™). Comparison in the dose domain between the MC and Eclipse AAA/Acuros calculations yields conclusions consistent with the MU calculation. Conclusions: A computational framework relating the MU and dose domains has been established. The framework not only enables direct verification of the MU values of the involved station points of a VMAT plan in the MU domain but also provides a much-needed mechanism to adaptively modify the MU values of the station points in accordance with a specific change in the dose domain.
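
    The described relationship d = A·MU, with A obtained from unit-MU MC doses, makes the MU recovery a least-squares problem; a sketch using a non-negative least-squares solve on random stand-in data (the non-negativity constraint is an added physical assumption, not stated in the abstract):

        # d_i = sum_j A_ij * MU_j over station points j; recover the MUs
        # from the TPS dose by least squares. Matrix and doses are random.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_voxels, n_stations = 500, 60
        A = rng.uniform(0.0, 1e-3, (n_voxels, n_stations))  # dose per unit MU
        mu_true = rng.uniform(5.0, 50.0, n_stations)
        d_tps = A @ mu_true                                 # TPS dose at voxels

        mu_fit, residual = nnls(A, d_tps)                   # enforce MU >= 0
        print(np.max(np.abs(mu_fit - mu_true) / mu_true))   # relative MU error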

  9. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MatLab to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the individual beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated doses of broad beams and the reference doses. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV, and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm², and 20×20 cm²) were calculated and compared with the reference doses exported from the Eclipse TPS. For depth-dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient beamlet dose calculation accuracy for IMRT optimization.
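
    A hedged sketch of the penumbra-fitting stage, using a toy error-function broad-beam profile as the kernel model; the functional form and the reference data are illustrative assumptions, not the commissioned FSPB kernel:

        # Fit profile parameters to a reference broad-beam penumbra with
        # Levenberg-Marquardt, as the abstract describes; the 1D
        # difference-of-erf profile below is a stand-in model.
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.special import erf

        def penumbra_model(params, x, half_width):
            sigma, dmax = params
            # Broad-beam lateral profile as a difference of error functions.
            return 0.5 * dmax * (erf((x + half_width) / sigma)
                                 - erf((x - half_width) / sigma))

        x = np.linspace(-8.0, 8.0, 161)                  # off-axis position (cm)
        reference = penumbra_model((0.45, 1.0), x, 5.0)  # stand-in for TPS export
        fit = least_squares(
            lambda p: penumbra_model(p, x, 5.0) - reference,
            x0=(0.3, 0.9), method="lm")                  # Levenberg-Marquardt
        print(fit.x)                                     # fitted profile parameters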

  10. Setting Occupational Exposure Limits for Genotoxic Substances in the Pharmaceutical Industry.

    PubMed

    Lovsin Barle, Ester; Winkler, Gian Christian; Glowienke, Susanne; Elhajouji, Azeddine; Nunic, Jana; Martus, Hans-Joerg

    2016-05-01

    In the pharmaceutical industry, genotoxic drug substances are developed for life-threatening indications such as cancer. Healthy employees handle these substances during research, development, and manufacturing; therefore, safe handling of genotoxic substances is essential. When an adequate preclinical dataset is available, a risk-based decision on exposure controls for manufacturing is made following the determination of safe health-based limits, such as an occupational exposure limit (OEL). OELs are calculated for substances with a threshold dose-response once a threshold is identified. In this review, we present examples of genotoxic mechanisms for which thresholds can be demonstrated and OELs can be calculated, including a holistic toxicity assessment. We also propose a novel approach for an inhalation Threshold of Toxicological Concern (TTC) limit for genotoxic substances in cases where the database is not adequate to determine a threshold. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
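
    For orientation, a commonly used health-based OEL derivation divides a point of departure (e.g., a NOAEL) by a composite uncertainty factor and scales by body weight and daily breathing volume; this generic formula, with invented inputs, is a sketch and not the authors' specific methodology:

        # Generic health-based OEL arithmetic; all parameter values are
        # invented defaults, and alpha is an assumed absorption adjustment.
        def oel_mg_per_m3(noael_mg_per_kg_day, body_weight_kg=70.0,
                          breathing_volume_m3=10.0, composite_uf=100.0,
                          alpha=1.0):
            """alpha: inhalation absorption adjustment (assumed 1 here)."""
            return (noael_mg_per_kg_day * body_weight_kg * alpha
                    / (breathing_volume_m3 * composite_uf))

        # e.g. a NOAEL of 5 mg/kg/day with a composite uncertainty factor of 100:
        print(f"{oel_mg_per_m3(5.0):.3f} mg/m3")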

  11. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations and (2) fragment-based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches, using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster-in-molecule (CIM) approach as the fragment-based method. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods as the method of choice for the subsystem calculations. Our cluster-in-molecule approach is closely related to, but deviates slightly from, approaches in the literature, since we have avoided real-space cutoffs. Moreover, the distant pair correlations neglected in previous CIM approaches are treated approximately. Six very large molecules (503-2380 atoms) were studied. At both the MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, the DLPNO methods are more accurate for three-dimensional systems. While we have found only little incentive for combining CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) it offers the better parallelization opportunities of CIM; (2) the methodology is less memory-intensive than the genuine DLPNO-CCSD(T) method and hence allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently encountered cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.
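
    A toy illustration of the fragment-based bookkeeping in flavor (2): if every occupied orbital is the central orbital of exactly one fragment, summing each fragment's central-orbital pair energies reassembles the total correlation energy. The pair-energy table below is random; a real CIM run would obtain it from per-fragment DLPNO or canonical calculations:

        # Conceptual CIM energy assembly on a random pair-energy table.
        import numpy as np

        rng = np.random.default_rng(2)
        n_orb = 12
        pair = -np.abs(rng.normal(0.01, 0.005, (n_orb, n_orb)))
        pair = (pair + pair.T) / 2.0      # symmetric pair correlation energies

        fragments = [range(0, 4), range(4, 8), range(8, 12)]  # central orbitals
        e_cim = sum(pair[i, j]
                    for frag in fragments for i in frag for j in range(n_orb))
        e_full = pair.sum()               # "canonical" sum over all pairs
        print(e_cim, e_full)              # identical: fragments partition the pairs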

  12. Traction-free vibrations of finite trigonal elastic cylinders.

    PubMed

    Heyliger, Paul R; Johnson, Ward L

    2003-04-01

    The unrestrained, traction-free vibrations of finite elastic cylinders with trigonal material symmetry are studied using two approaches, based on the Ritz method, which formulate the weak form of the equations of motion in cylindrical and rectangular coordinates. Elements of group theory are used to divide the approximation functions into orthogonal subsets, thus reducing the size of the computational problem and classifying the general symmetries of the vibrational modes. Results for the special case of an isotropic cylinder are presented and compared with values published by other researchers. For the isotropic case, the relative accuracy of the formulations in cylindrical and rectangular coordinates can be evaluated because exact analytical solutions are known for the torsional modes. The calculation in cylindrical coordinates is found to be more accurate for a given number of terms in the series approximation functions. For a representative trigonal material, langatate, calculations of the resonant frequencies and of the sensitivity of the frequencies to each of the elastic constants are presented. The dependence on geometry (the ratio of length to diameter) is briefly explored. The special case of a transversely isotropic cylinder (with the elastic stiffness C14 equal to zero) is also considered.
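
    The Ritz machinery underlying both formulations leads to a generalized eigenproblem K a = ω² M a in the chosen approximation functions; a minimal sketch for a free-free 1D rod (a stand-in geometry, far simpler than the trigonal cylinder) showing the mechanics:

        # Assemble stiffness K and mass M in a simple basis and solve
        # K a = omega^2 M a; the rod model and material are illustrative.
        import numpy as np
        from scipy.linalg import eigh

        n, L, E, rho, A = 50, 1.0, 200e9, 7800.0, 1e-4  # elements; steel rod
        h = L / n
        K = np.zeros((n + 1, n + 1)); M = np.zeros((n + 1, n + 1))
        for e in range(n):  # standard linear bar elements as Ritz functions
            ke = E * A / h * np.array([[1, -1], [-1, 1]])
            me = rho * A * h / 6 * np.array([[2, 1], [1, 2]])
            K[e:e+2, e:e+2] += ke; M[e:e+2, e:e+2] += me
        w2, _ = eigh(K, M)                  # generalized symmetric eigenproblem
        freqs = np.sqrt(np.clip(w2, 0, None)) / (2 * np.pi)
        print(freqs[1:4])                   # first nonzero natural frequencies (Hz)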

  13. Burden of suicide in Poland in 2012: how could it be measured and how big is it?

    PubMed

    Orlewska, Katarzyna; Orlewska, Ewa

    2018-04-01

    The aim of our study was to estimate the health-related and economic burden of suicide in Poland in 2012 and to demonstrate the effects of using different assumptions on the disease burden estimation. Years of life lost (YLL) were calculated by multiplying the number of deaths by the remaining life expectancy. Local expected YLL (LEYLL) and standard expected YLL (SEYLL) were computed using Polish life expectancy tables and WHO standards, respectively. In the base case analysis, LEYLL and SEYLL were computed with 3.5% and 0% discount rates, respectively, and no age-weighting. Premature mortality costs were calculated using a human capital approach, with discounting at 5%, and are reported in Polish zloty (PLN) (1 euro = 4.3 PLN). The impact of applying different assumptions on the base-case estimates was tested in sensitivity analyses. The total LEYLLs and SEYLLs due to suicide were 109,338 and 279,425, respectively, with 88% attributable to male deaths. The cost of male premature mortality (2,808,854,532 PLN) was substantially higher than that for females (177,852,804 PLN). Discounting and age-weighting have a large effect on the base-case estimates of LEYLLs. The greatest impact on the estimates of suicide-related premature mortality costs was due to the value of the discount rate. Our findings provide quantitative evidence on the burden of suicide. In our opinion, each of the demonstrated methods brings something valuable to the evaluation of the impact of suicide on a given population, but LEYLLs and premature mortality costs estimated according to national guidelines have the potential to be useful for local public health policymakers.
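
    The YLL arithmetic above is deaths × remaining life expectancy, and with a continuous discount rate r the standard per-death term becomes (1 - e^(-rL))/r; a sketch with invented inputs rather than the Polish 2012 data:

        # Discounted and undiscounted YLL; the death counts are invented.
        import numpy as np

        def yll(deaths, life_expectancy, discount_rate=0.0):
            if discount_rate == 0.0:
                return deaths * life_expectancy
            r = discount_rate
            return deaths * (1.0 - np.exp(-r * life_expectancy)) / r

        # e.g. 100 male deaths at an age with 35 remaining expected years:
        print(yll(100, 35.0))           # undiscounted (as for SEYLL at 0%)
        print(yll(100, 35.0, 0.035))    # discounted at 3.5% (as for LEYLL)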

  14. Costs and cost-effectiveness of training traditional birth attendants to reduce neonatal mortality in the Lufwanyama Neonatal Survival study (LUNESP).

    PubMed

    Sabin, Lora L; Knapp, Anna B; MacLeod, William B; Phiri-Mazala, Grace; Kasimba, Joshua; Hamer, Davidson H; Gill, Christopher J

    2012-01-01

    The Lufwanyama Neonatal Survival Project ("LUNESP") was a cluster randomized, controlled trial which showed that training traditional birth attendants (TBAs) to perform interventions targeting birth asphyxia, hypothermia, and neonatal sepsis reduced all-cause neonatal mortality by 45%. This companion analysis was undertaken to analyze intervention costs and cost-effectiveness, and the factors that might improve cost-effectiveness. We calculated LUNESP's financial and economic costs and the economic cost of implementation for a forecasted ten-year program (2011-2020). In each case, we calculated the incremental cost per death avoided and per disability-adjusted life year (DALY) averted in real 2011 US dollars. The forecasted 10-year program analysis included a base case as well as 'conservative' and 'optimistic' scenarios. Uncertainty was characterized using one-way sensitivity analyses and a multivariate probabilistic sensitivity analysis. The estimated financial and economic costs of LUNESP were $118,574 and $127,756, respectively, or $49,469 and $53,550 per year. Fixed costs accounted for nearly 90% of total costs. For the 10-year program, discounted total and annual program costs were $256,455 and $26,834, respectively; for the base case, optimistic, and conservative scenarios, the estimated cost per death avoided was $1,866, $591, and $3,024, and the cost per DALY averted was $74, $24, and $120, respectively. Outcomes were robust to variations in local costs but sensitive to variations in intervention effect size, the number of births attended by TBAs, and the extent of foreign consultants' participation. Based on established guidelines, the strategy of using trained TBAs to reduce neonatal mortality was 'highly cost effective'. We strongly recommend consideration of this approach for other remote rural populations with limited access to health care.
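
    The incremental cost-effectiveness arithmetic reported above amounts to discounting program costs and dividing by deaths avoided or DALYs averted; a sketch with invented placeholder inputs:

        # Discounted cost and simple cost-effectiveness ratios; all numbers
        # are placeholders, not the LUNESP estimates.
        def discounted_total(annual_cost, years, rate):
            return sum(annual_cost / (1.0 + rate) ** t for t in range(years))

        def icers(annual_cost, years, rate, deaths_avoided, dalys_averted):
            cost = discounted_total(annual_cost, years, rate)
            return cost / deaths_avoided, cost / dalys_averted

        cost_per_death, cost_per_daly = icers(
            annual_cost=30000.0, years=10, rate=0.03,
            deaths_avoided=140.0, dalys_averted=3500.0)
        print(f"${cost_per_death:.0f} per death avoided, "
              f"${cost_per_daly:.0f} per DALY averted")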

  15. Costs and Cost-Effectiveness of Training Traditional Birth Attendants to Reduce Neonatal Mortality in the Lufwanyama Neonatal Survival Study (LUNESP)

    PubMed Central

    Sabin, Lora L.; Knapp, Anna B.; MacLeod, William B.; Phiri-Mazala, Grace; Kasimba, Joshua; Hamer, Davidson H.; Gill, Christopher J.

    2012-01-01

    Background The Lufwanyama Neonatal Survival Project (“LUNESP”) was a cluster randomized, controlled trial which showed that training traditional birth attendants (TBAs) to perform interventions targeting birth asphyxia, hypothermia, and neonatal sepsis reduced all-cause neonatal mortality by 45%. This companion analysis was undertaken to analyze intervention costs and cost-effectiveness, and the factors that might improve cost-effectiveness. Methods and Findings We calculated LUNESP's financial and economic costs and the economic cost of implementation for a forecasted ten-year program (2011–2020). In each case, we calculated the incremental cost per death avoided and per disability-adjusted life year (DALY) averted in real 2011 US dollars. The forecasted 10-year program analysis included a base case as well as ‘conservative’ and ‘optimistic’ scenarios. Uncertainty was characterized using one-way sensitivity analyses and a multivariate probabilistic sensitivity analysis. The estimated financial and economic costs of LUNESP were $118,574 and $127,756, respectively, or $49,469 and $53,550 per year. Fixed costs accounted for nearly 90% of total costs. For the 10-year program, discounted total and annual program costs were $256,455 and $26,834, respectively; for the base case, optimistic, and conservative scenarios, the estimated cost per death avoided was $1,866, $591, and $3,024, and the cost per DALY averted was $74, $24, and $120, respectively. Outcomes were robust to variations in local costs but sensitive to variations in intervention effect size, the number of births attended by TBAs, and the extent of foreign consultants' participation. Conclusions Based on established guidelines, the strategy of using trained TBAs to reduce neonatal mortality was ‘highly cost effective’. We strongly recommend consideration of this approach for other remote rural populations with limited access to health care. PMID:22545117

  16. Energy modulated electron therapy using a few leaf electron collimator in combination with IMRT and 3D-CRT: Monte Carlo-based planning and dosimetric evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Yahya, Khalid; Schwartz, Matthew; Shenouda, George

    2005-09-15

    Energy modulated electron therapy (EMET) based on Monte Carlo dose calculation is a promising technique that enhances the treatment planning and delivery of superficially located tumors. This study investigated the application of EMET using a novel few-leaf electron collimator (FLEC) in head-and-neck and breast sites in comparison with three-dimensional conventional radiation therapy (3D-CRT) and intensity modulated radiation therapy (IMRT) techniques. Treatment planning was performed for two parotid cases and one breast case. Four plans were compared for each case: 3D-CRT, IMRT, 3D-CRT in conjunction with EMET (EMET-CRT), and IMRT in conjunction with EMET (EMET-IMRT), all of which were performed and calculated with Monte Carlo techniques. For all patients, dose-volume histograms (DVHs) were obtained for all organs of interest, and the DVHs were used as a means of comparing the plans. The homogeneity and conformity of the dose distributions were calculated, as well as a sparing index that quantifies the effect of the low isodose lines. In addition, the whole-body dose equivalent (WBDE) was estimated for each plan. Adding EMET delivered with the FLEC to 3D-CRT improves the sparing of normal tissues. For the two head-and-neck cases, the mean dose to the contralateral parotid and brain stem was reduced relative to IMRT by 43% and 84%, and by 57% and 71%, respectively. The improved normal tissue sparing was quantified as an increase in the sparing index of 47% and 30% for the head-and-neck and breast cases, respectively. Adding EMET to either 3D-CRT or IMRT preserves target conformity and dose homogeneity. When EMET was added to the treatment plan, the WBDE was reduced by between 6% and 19% for 3D-CRT and by between 21% and 33% for IMRT, while the WBDE for EMET-CRT was reduced by up to 72% when compared with IMRT. The FLEC offers a practical means of delivering modulated electron therapy. Although adding EMET delivered using the FLEC perturbs target conformity when compared with IMRT, it significantly improves normal tissue sparing while offering enhanced target conformity relative to 3D-CRT planning.

  17. Accidents on board merchant ships. Suggestions based on Centro Internazionale Radio Medico (CIRM) experience.

    PubMed

    Napoleone, Paolo

    2016-01-01

    This statistical study was performed to determine the occurrence of accidents on board ships assisted by Centro Internazionale Radio Medico (CIRM) during the years 2010-2015, with the aim of providing suggestions for accident prevention based on this wide experience. The case histories of CIRM in the years 2010-2015 were examined. The total number of accidents per year was calculated and compared, as a percentage, with the total number of cases assisted by CIRM per year. The incidence of accidents on board in these years ranged between 14.4% and 18.4% of the total cases assisted per year, a proportion that increased steadily over the period. The most common injuries on board among cases treated by CIRM were contusions and wounds. Burns and eye injuries were also significantly represented. Multiple injuries and head injuries were found to be the most frequent causes of death on board due to an accident. More information on the occurrence and type of accidents and on the injured body areas should form the basis for developing strategies and campaigns for their prevention.

  18. Stereochemical analysis of (+)-limonene using theoretical and experimental NMR and chiroptical data

    NASA Astrophysics Data System (ADS)

    Reinscheid, F.; Reinscheid, U. M.

    2016-02-01

    Using limonene as a test molecule, the successes and the limitations of three chiroptical methods (optical rotatory dispersion (ORD), and electronic and vibrational circular dichroism, ECD and VCD) could be demonstrated. At quite low levels of theory (mpw1pw91/cc-pvdz, IEFPCM (integral equation formalism polarizable continuum model)), the experimental ORD values differ by less than 10 units from the calculated values. Modelling in the condensed phase still represents a challenge, so experimental NMR data were used to test for aggregation and solvent-solute interactions. After establishing a reasonable structural model, only the ECD spectra prediction showed a decisive dependence on the basis set: only augmented (in the case of Dunning's basis sets) or diffuse (in the case of Pople's basis sets) basis sets predicted the position and shape of the ECD bands correctly. Based on these results, we propose a procedure to assign the absolute configuration (AC) of an unknown compound using the comparison between experimental and calculated chiroptical data.

  19. Imprecise (fuzzy) information in geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardossy, A.; Bogardi, I.; Kelly, W.E.

    1988-05-01

    A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been an insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram, which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.
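
    For reference, the crisp ordinary-kriging step that the fuzzy procedure generalizes solves a small linear system built from the variogram; in the fuzzy version the single variogram below would carry a membership function, propagating imprecision into each kriged value. Data and variogram parameters here are invented:

        # Ordinary kriging: solve [gamma 1; 1 0][w; mu] = [gamma_0; 1] for
        # the weights, then estimate = w.z and variance = w.b.
        import numpy as np

        def spherical_variogram(h, sill=1.0, rng_=100.0, nugget=0.0):
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
            return np.where(h < rng_, g, sill)

        def ordinary_krige(xy, z, x0):
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1)); A[:n, :n] = spherical_variogram(d)
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = spherical_variogram(np.linalg.norm(xy - x0, axis=1))
            w = np.linalg.solve(A, b)       # kriging weights + Lagrange multiplier
            return w[:n] @ z, w @ b         # estimate, estimation variance

        pts = np.array([[0.0, 0.0], [40.0, 10.0], [10.0, 60.0], [70.0, 70.0]])
        vals = np.array([-7.1, -6.8, -7.5, -6.9])   # e.g. log10 permeability
        print(ordinary_krige(pts, vals, np.array([30.0, 30.0])))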

  20. WE-F-201-00: Practical Guidelines for Commissioning Advanced Brachytherapy Dose Calculation Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    With the recent introduction of heterogeneity correction algorithms for brachytherapy, the AAPM community is still unclear on how to commission these and implement them into clinical practice. The recently published AAPM TG-186 report discusses important issues for the clinical implementation of these algorithms. A charge of the AAPM-ESTRO-ABG Working Group on MBDCA in Brachytherapy (WGMBDCA) is the development of a set of well-defined test case plans, available as references in the software commissioning process to be performed by clinical end-users. In this practical medical physics course, specific examples of how to perform the commissioning process are presented, as well as descriptions of the clinical impact from recent literature reporting comparisons of TG-43 and heterogeneity-based dosimetry. Learning Objectives: Identify key clinical applications needing advanced dose calculation in brachytherapy. Review TG-186 and WGMBDCA guidelines, the commissioning process, and dosimetry benchmarks. Evaluate clinical cases using commercially available systems and compare to TG-43 dosimetry.
