Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong
2007-04-01
Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from more accurate cost calculation compared to traditional step-down costing, and from the potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients with surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
Radioactive waste disposal fees-Methodology for calculation
NASA Astrophysics Data System (ADS)
Bemš, Július; Králík, Tomáš; Kubančák, Ján; Vašíček, Jiří; Starý, Oldřich
2014-11-01
This paper summarizes the methodological approach used for calculation of the fee for low- and intermediate-level radioactive waste disposal and for spent fuel disposal. The methodology itself is based on simulation of the cash flows related to the operation of the waste disposal system. The paper includes a demonstration of the methodology applied to the conditions of the Czech Republic.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-04
... NMAC) addition of in subsections methodology for (A) and (B). fugitive dust control permits, revised... fee Fee Calculations and requirements for Procedures. fugitive dust control permits. 9/7/2004 Section... schedule based on acreage, add and update calculation methodology used to calculate non- programmatic dust...
An AIS-based approach to calculate atmospheric emissions from the UK fishing fleet
NASA Astrophysics Data System (ADS)
Coello, Jonathan; Williams, Ian; Hudson, Dominic A.; Kemp, Simon
2015-08-01
The fishing industry is heavily reliant on the use of fossil fuel and emits large quantities of greenhouse gases and other atmospheric pollutants. Methods used to calculate fishing vessel emissions inventories have traditionally utilised estimates of fuel efficiency per unit of catch. These methods have weaknesses because they do not easily allow temporal and geographical allocation of emissions. A large proportion of fishing and other small commercial vessels are also omitted from global shipping emissions inventories such as the International Maritime Organisation's Greenhouse Gas Studies. This paper demonstrates an activity-based methodology for the production of temporally and spatially resolved emissions inventories using data produced by Automatic Identification Systems (AIS). The methodology addresses the issues of how to use AIS data for fleets where not all vessels use AIS technology and how to assign engine load when vessels are towing trawling or dredging gear. The results of this approach are compared to a fuel-based methodology using publicly available European Commission fisheries data on fuel efficiency and annual catch. The results show relatively good agreement between the two methodologies, with an estimate of 295.7 kilotons of fuel used and 914.4 kilotons of carbon dioxide emitted between May 2012 and May 2013 using the activity-based methodology. Different methods of calculating speed using AIS data are also compared. The results indicate that using the speed data contained directly in the AIS messages is preferable to calculating speed from the distance and time interval between consecutive AIS data points.
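A minimal sketch of the activity-based calculation for a single AIS reporting interval is given below; the cubic propeller-law load factor, the towing load factor, the specific fuel oil consumption and the CO2 emission factor are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of an activity-based emissions estimate from AIS records.
# The load-factor model (cubic propeller law), SFOC and CO2 emission factor
# are illustrative assumptions rather than the paper's values.

CO2_PER_KG_FUEL = 3.206   # kg CO2 per kg fuel, assumed marine diesel factor
SFOC = 0.220              # kg fuel per kWh, assumed specific fuel oil consumption

def record_emissions(speed_kn, design_speed_kn, installed_power_kw,
                     interval_h, towing_gear=False):
    """Fuel (kg) and CO2 (kg) for one AIS reporting interval."""
    if towing_gear:
        load = 0.75                                          # assumed load while trawling/dredging
    else:
        load = min((speed_kn / design_speed_kn) ** 3, 1.0)   # propeller-law estimate
    fuel_kg = installed_power_kw * load * SFOC * interval_h
    return fuel_kg, fuel_kg * CO2_PER_KG_FUEL

# Example: a 750 kW trawler steaming at 9 kn (design speed 11 kn) for a 0.5 h interval
print(record_emissions(9.0, 11.0, 750.0, 0.5))
```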
Doumouras, Aristithes G; Gomez, David; Haas, Barbara; Boyes, Donald M; Nathens, Avery B
2012-09-01
The regionalization of medical services has resulted in improved outcomes and greater compliance with existing guidelines. For certain "time-critical" conditions intimately associated with emergency medicine, early intervention has demonstrated mortality benefits. For these conditions, then, appropriate triage within a regionalized system at first diagnosis is paramount, ideally occurring in the field by emergency medical services (EMS) personnel. Therefore, EMS ground transport access is an important metric in the ongoing evaluation of a regionalized care system for time-critical emergency services. To our knowledge, no studies have demonstrated how methodologies for calculating EMS ground transport access differ in their estimates of access over the same study area for the same resource. This study uses two methodologies to calculate EMS ground transport access to trauma center care in a single study area to explore their manifestations and critically evaluate the differences between the methodologies. Two methodologies were compared in their estimations of EMS ground transport access to trauma center care: a routing methodology (RM) and an as-the-crow-flies methodology (ACFM). These methodologies were adaptations of the only two methodologies that had previously been used in the literature to calculate EMS ground transport access to time-critical emergency services across the United States. The RM and ACFM were applied to the nine Level I and Level II trauma centers within the province of Ontario by creating trauma center catchment areas at 30, 45, 60, and 120 minutes and calculating the population and area encompassed by the catchments. Because the methodologies were identical for measuring air access, this study looks specifically at EMS ground transport access. Catchments for the province were created for each methodology at each time interval, and their populations and areas were significantly different at all time periods. Specifically, the RM calculated significantly larger populations at every time interval, while the ACFM calculated larger catchment area sizes. This trend is counterintuitive (a larger catchment area would be expected to contain a larger population), and the disparity was greatest at the shortest time intervals (under 60 minutes). Through critical evaluation of the differences, the authors found that the ACFM can attribute road access to areas with no roads and overestimates access in low-density areas compared to the RM, potentially affecting delivery-of-care decisions. Based on these results, the authors believe that future methodologies for calculating EMS ground transport access must incorporate a continuous and valid route through the road network as well as use travel speeds appropriate to the road segments traveled. Overall, as more complex models for calculating EMS ground transport access come into use, a standard methodology is needed against which they can be compared and improved. Based on these findings, the authors believe this standard should be the RM. © 2012 by the Society for Academic Emergency Medicine.
New methodology for fast prediction of wheel wear evolution
NASA Astrophysics Data System (ADS)
Apezetxea, I. S.; Perez, X.; Casanueva, C.; Alonso, A.
2017-07-01
In railway applications, wear prediction at the wheel-rail interface is a fundamental matter for studying problems such as wheel lifespan and the evolution of vehicle dynamic characteristics with time. However, one of the principal drawbacks of the existing methodologies for calculating the wear evolution is the computational cost. This paper proposes a new wear prediction methodology with a reduced computational cost. This methodology is based on two main steps: the first is the substitution of the calculations over the whole network by the calculation of the contact conditions at certain characteristic points, from whose results the wheel wear evolution can be inferred. The second is the substitution of the dynamic calculation (time integration) by a quasi-static calculation (the solution of the quasi-static situation of a vehicle at a certain point, which is equivalent to neglecting the acceleration terms in the dynamic equations). These simplifications allow a significant reduction of the computational cost to be obtained while maintaining an acceptable level of accuracy (errors of the order of 5-10%). Several case studies are analysed in the paper with the objective of assessing the proposed methodology. The results obtained in the case studies lead to the conclusion that the proposed methodology is valid for an arbitrary vehicle running through an arbitrary track layout.
40 CFR 98.273 - Calculating GHG emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... fossil fuels and combustion of biomass in spent liquor solids. (1) Calculate fossil fuel-based CO2 emissions from direct measurement of fossil fuels consumed and default emissions factors according to the Tier 1 methodology for stationary combustion sources in § 98.33(a)(1). (2) Calculate fossil fuel-based...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications on reservoir design are discussed.
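As a rough illustration of what a volumetric generation estimate involves, the sketch below computes heat-in-place for one cubic kilometre of rock and converts it to an average electric potential over an assumed plant life; the recovery factor, rock properties and conversion efficiency are placeholders, and the Gringarten flow solution itself is not included.

```python
# Rough heat-in-place estimate for 1 km^3 of rock and its electric-generation
# potential; recovery factor, rock properties and conversion efficiency are
# illustrative assumptions, not the Gringarten-based values of the study.

RHO_C = 2.55e6        # volumetric heat capacity of rock, J/(m^3*K), assumed
VOLUME = 1.0e9        # reservoir volume, m^3 (1 km^3)
LIFETIME_S = 30 * 365.25 * 24 * 3600.0   # assumed 30-year plant life

def egs_potential_mwe(t_rock_c, t_reject_c=80.0, recovery=0.14, eta_th=0.12):
    """Average electric potential (MWe) of 1 km^3 of rock at t_rock_c."""
    heat_j = RHO_C * VOLUME * (t_rock_c - t_reject_c) * recovery
    return heat_j * eta_th / LIFETIME_S / 1.0e6

print(egs_potential_mwe(200.0))  # e.g. a 200 degC reservoir
```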
76 FR 72134 - Annual Charges for Use of Government Lands
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-22
... revise the methodology used to compute these annual charges. Under the proposed rule, the Commission would create a fee schedule based on the U.S. Bureau of Land Management's (BLM) methodology for calculating rental rates for linear rights of way. This methodology includes a land value per acre, an...
Haddad, S; Tardif, R; Viau, C; Krishnan, K
1999-09-05
The biological hazard index (BHI) is defined as the biological level tolerable for exposure to a mixture, and is calculated by an equation similar to the conventional hazard index. The BHI calculation, at the present time, is advocated for use in situations where toxicokinetic interactions do not occur among mixture constituents. The objective of this study was to develop an approach for calculating an interactions-based BHI for chemical mixtures. The approach consisted of simulating the concentration of the exposure indicator in the biological matrix of choice (e.g. venous blood) for each component of the mixture to which workers are exposed, and then comparing these to the established BEI values to calculate the BHI. The simulation of biomarker concentrations was performed using a physiologically-based toxicokinetic (PBTK) model which accounted for the mechanism of interactions among all mixture components (e.g. competitive inhibition). The usefulness of the present approach is illustrated by calculating the BHI for varying ambient concentrations of a mixture of three chemicals (toluene (5-40 ppm), m-xylene (10-50 ppm), and ethylbenzene (10-50 ppm)). The results show that the interactions-based BHI can be greater or smaller than that calculated on the basis of the additivity principle, particularly at high exposure concentrations. At lower exposure concentrations (e.g. 20 ppm each of toluene, m-xylene and ethylbenzene), the BHI values obtained using the conventional methodology are similar to those from the interactions-based methodology, confirming that the consequences of competitive inhibition are negligible at lower concentrations. The advantage of the PBTK model-based methodology developed in this study is that the concentrations of individual chemicals in mixtures that will not result in a significant increase in the BHI (i.e., BHI > 1) can be determined by iterative simulation.
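A minimal sketch of the BHI summation is shown below; in the interactions-based approach the biomarker concentrations would come from the PBTK model, whereas here they are placeholder inputs, and the BEI values are illustrative rather than ACGIH figures.

```python
# Additive BHI: sum of predicted biomarker levels divided by their Biological
# Exposure Indices (BEIs). In the interactions-based approach the biomarker
# concentrations would come from a PBTK model; here they are placeholders.
# BEI values below are illustrative, not ACGIH figures.

def bhi(biomarker_conc, bei):
    """Biological hazard index: sum_i C_i / BEI_i; exposure acceptable if <= 1."""
    return sum(biomarker_conc[s] / bei[s] for s in biomarker_conc)

conc = {"toluene_blood": 0.4, "xylene_blood": 0.8, "ethylbenzene_blood": 0.6}  # mg/L, simulated
bei  = {"toluene_blood": 1.0, "xylene_blood": 1.5, "ethylbenzene_blood": 1.2}  # mg/L, assumed
print(bhi(conc, bei))
```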
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
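The core of a likelihood-ratio-based power calculation can be sketched as follows; the dOFV values stand in for NONMEM output, and the Monte Carlo Mapped Power method itself adds a re-sampling step over individual dOFV contributions that is not reproduced here.

```python
# Basic simulation-based power estimate with the likelihood ratio test: power is
# the fraction of simulated datasets in which the drop in objective function
# value (dOFV) between the covariate and no-covariate models exceeds the
# chi-square cutoff (3.841 for 1 degree of freedom at alpha = 0.05). The dOFV
# values below are placeholders standing in for NONMEM output.

CHI2_CUTOFF_1DF_005 = 3.841

def lrt_power(dofv_per_dataset, cutoff=CHI2_CUTOFF_1DF_005):
    hits = sum(1 for d in dofv_per_dataset if d > cutoff)
    return hits / len(dofv_per_dataset)

simulated_dofv = [5.2, 2.1, 7.9, 4.4, 1.3, 6.0, 3.9, 8.8, 4.1, 5.5]  # illustrative
print(lrt_power(simulated_dofv))  # 0.8 -> 80% power at this sample size
```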
Gu, Huidong; Wang, Jian; Aubry, Anne-Françoise; Jiang, Hao; Zeng, Jianing; Easter, John; Wang, Jun-sheng; Dockens, Randy; Bifano, Marc; Burrell, Richard; Arnold, Mark E
2012-06-05
A methodology for the accurate calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) assays and its application in supporting microdose absolute bioavailability studies are reported for the first time. For simplicity, this calculation methodology and the strategy to minimize the isotopic interference are demonstrated using a simple molecular entity, then applied to actual development drugs. The exact isotopic interferences calculated with this methodology were often much less than the traditionally used, overestimated isotopic interferences based simply on the molecular isotope abundance. One application of the methodology is the selection of a stable isotopically labeled internal standard (SIL-IS) for an LC-MS/MS bioanalytical assay. The second application is the selection of an SIL analogue for use in intravenous (i.v.) microdosing for the determination of absolute bioavailability. In the case of microdosing, the traditional approach to calculating isotopic interferences can result in selecting a labeling scheme that overlabels the i.v.-dosed drug or leads to incorrect conclusions on the feasibility of using an SIL drug and analysis by LC-MS/MS. The methodology presented here can guide the synthesis by accurately calculating the isotopic interferences when labeling at different positions, using different selected reaction monitoring (SRM) transitions or adding more labeling positions. This methodology has been successfully applied to the selection of the labeled i.v.-dosed drugs for use in two microdose absolute bioavailability studies, before initiating the chemical synthesis. With this methodology, significant time and cost savings can be achieved in supporting microdose absolute bioavailability studies with stable labeled drugs.
Highway User Benefit Analysis System Research Project #128
DOT National Transportation Integrated Search
2000-10-01
In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
Infiltration modeling guidelines for commercial building energy analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.
This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of the infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.
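For orientation, the sketch below evaluates the EnergyPlus ZoneInfiltration:DesignFlowRate expression with only the linear wind term active, which mimics a DOE-2-style model; the coefficient value is illustrative, not the report's calibrated number.

```python
# EnergyPlus ZoneInfiltration:DesignFlowRate combines a design flow rate with
# temperature- and wind-dependent terms:
#   Infiltration = I_design * F_sched * (A + B*|Tzone - Toutdoor| + C*V + D*V^2)
# Keeping only the linear wind term (A = B = D = 0) mimics the DOE-2-style
# model; the coefficient below is illustrative, not the report's value.

def infiltration_m3s(i_design_m3s, wind_speed_ms, f_schedule=1.0,
                     a=0.0, b=0.0, c=0.224, d=0.0, delta_t=0.0):
    return i_design_m3s * f_schedule * (a + b * abs(delta_t)
                                        + c * wind_speed_ms
                                        + d * wind_speed_ms ** 2)

print(infiltration_m3s(0.05, 3.0))  # 0.05 m^3/s design rate at 3 m/s wind
```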
San Luis Basin Sustainability Metrics Project: A Methodology for Evaluating Regional Sustainability
Although there are several scientifically-based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. To address these issues, we produced a scientifically-defensible, but straightforward and inexpensive, methodolog...
PCB congener analysis with Hall electrolytic conductivity detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edstrom, R.D.
1989-01-01
This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses from the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, and 1262, along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the analyzed fish samples collected during the fall of 1985 from the lower James River and lower Chesapeake Bay.
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2011 CFR
2011-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2014 CFR
2014-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2012 CFR
2012-10-01
... excluded from the data base used to compute the Federal payment rates. In addition, allowable costs related to exceptions payments under § 413.30(f) are excluded from the data base used to compute the Federal... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses— (i...
DOT National Transportation Integrated Search
2013-11-01
The Highway Capacity Manual (HCM) has had a delay-based level of service methodology for signalized intersections since 1985. : The 2010 HCM has revised the method for calculating delay. This happened concurrent with such jurisdictions as NYC reviewi...
Background for Joint Systems Aspects of AIR 6000
2000-04-01
Checkland's Soft Systems Methodology [7, 8, 9]. The analytical techniques that are proposed for joint systems work are based on calculating probability... SLMP Structural Life Management Plan; SOW Stand-Off Weapon; SSM Soft Systems Methodology; UAV Uninhabited Aerial Vehicle... Soft Systems Methodology in Action, John Wiley & Sons, Chichester, 1990. [10] Pearl, Judea, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Code of Federal Regulations, 2014 CFR
2014-04-01
... justified by a newly created property-based needs assessment (a life-cycle physical needs assessments... calculated as the sum of total operating cost, modernization cost, and costs to address accrual needs. Costs... assist PHAs in completing the assessments. The spreadsheet calculator is designed to walk housing...
Code of Federal Regulations, 2013 CFR
2013-04-01
... justified by a newly created property-based needs assessment (a life-cycle physical needs assessments... calculated as the sum of total operating cost, modernization cost, and costs to address accrual needs. Costs... assist PHAs in completing the assessments. The spreadsheet calculator is designed to walk housing...
Introducing a new bond reactivity index: Philicities for natural bond orbitals.
Sánchez-Márquez, Jesús; Zorrilla, David; García, Víctor; Fernández, Manuel
2017-12-22
In the present work, a new methodology for obtaining reactivity indices (philicities) is proposed. It is based on reactivity functions such as the Fukui function or the dual descriptor, and makes it possible to project the information from reactivity functions onto molecular orbitals, instead of onto the atoms of the molecule (atomic reactivity indices). The methodology focuses on the molecules' natural bond orbitals (bond reactivity indices) because these orbitals have the advantage of being localized, allowing the reaction site of an electrophile or nucleophile to be determined within a very precise molecular region. This methodology provides a "philicity" index for every NBO, and a representative set of molecules has been used to test the new definition. A new methodology has also been developed to compare the "finite difference" and the "frontier molecular orbital" approximations. To facilitate their use, the proposed methodology, as well as the possibility of calculating the new indices, has been implemented in a new version of the UCA-FUKUI software. In addition, condensation schemes based on atomic populations of the "atoms in molecules" theory, the Hirshfeld population analysis, the Mulliken approximation (with a minimal basis set) and electrostatic potential-derived charges have also been implemented, including the calculation of "bond reactivity indices" defined in previous studies.
Mapping Base Modifications in DNA by Transverse-Current Sequencing
NASA Astrophysics Data System (ADS)
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2018-02-01
Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.
Calculation of Shuttle Base Heating Environments and Comparison with Flight Data
NASA Technical Reports Server (NTRS)
Greenwood, T. F.; Lee, Y. C.; Bender, R. L.; Carter, R. E.
1983-01-01
The techniques, analytical tools, and experimental programs used initially to generate and later to improve and validate the Shuttle base heating design environments are discussed. In general, the measured base heating environments for STS-1 through STS-5 were in good agreement with the preflight predictions. However, some changes were made in the methodology after reviewing the flight data. The flight data is described, preflight predictions are compared with the flight data, and improvements in the prediction methodology based on the data are discussed.
Risk ranking of LANL nuclear material storage containers for repackaging prioritization.
Smith, Paul H; Jordan, Hans; Hoffman, Jenifer A; Eller, P Gary; Balkey, Simon
2007-05-01
Safe handling and storage of nuclear material at U.S. Department of Energy facilities relies on the use of robust containers to prevent container breaches and subsequent worker contamination and uptake. The U.S. Department of Energy has no uniform requirements for packaging and storage of nuclear materials other than those declared excess and packaged to DOE-STD-3013-2000. This report describes a methodology for prioritizing a large inventory of nuclear material containers so that the highest risk containers are repackaged first. The methodology utilizes expert judgment to assign respirable fractions and reactivity factors to accountable levels of nuclear material at Los Alamos National Laboratory. A relative risk factor is assigned to each nuclear material container based on a calculated dose to a worker due to a failed container barrier and a calculated probability of container failure based on material reactivity and container age. This risk-based methodology is being applied at LANL to repackage the highest risk materials first and, thus, accelerate the reduction of risk to nuclear material handlers.
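The ranking logic can be illustrated with a simple consequence-times-probability sketch; the weights and the age/reactivity failure model below are illustrative assumptions, not the expert-judgment values used at LANL.

```python
# Minimal sketch of a relative risk ranking: risk = consequence x probability,
# with consequence proportional to material-at-risk, respirable fraction and a
# dose factor, and failure probability increasing with container age and
# material reactivity. The weights and probability model are illustrative.

def container_risk(mar_g, respirable_frac, dose_per_g, age_yr, reactivity):
    consequence = mar_g * respirable_frac * dose_per_g       # worker-dose surrogate
    p_fail = min(1.0, 0.01 * age_yr * reactivity)            # assumed failure model
    return consequence * p_fail

inventory = {
    "C-101": container_risk(500, 1e-3, 2.0, 25, 1.5),
    "C-102": container_risk(200, 5e-4, 2.0, 10, 1.0),
}
# Repackage highest-risk containers first
print(sorted(inventory, key=inventory.get, reverse=True))
```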
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Devlin, P; Bhagwat, M
Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning representative of the prescribed depth and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as the criterion for a true positive that >10% of plan dwells had a distance to the prescription dose >1 mm different than the prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
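A minimal sketch of the automatic test is shown below, assuming the expected TRAK has already been derived from the pre-calculated standard plans.

```python
# Sketch of the automatic reasonableness test: flag a skin HDR plan when its
# total reference air kerma (TRAK) differs from the expected value by more than
# 10%. The expected TRAK would come from pre-calculated standard plans of known
# curvature scaled to the treated area; here it is simply passed in.

def flag_plan(plan_trak, expected_trak, tolerance=0.10):
    """Return True if the plan needs additional manual review."""
    return abs(plan_trak - expected_trak) / expected_trak > tolerance

print(flag_plan(plan_trak=1.28, expected_trak=1.10))  # units: cGy*m^2, illustrative
```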
The methodology of the gas turbine efficiency calculation
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Job, Marcin; Brzęczek, Mateusz; Nawrat, Krzysztof; Mędrych, Janusz
2016-12-01
In the paper, a calculation methodology for the isentropic efficiency of a compressor and turbine in a gas turbine installation, on the basis of polytropic efficiency characteristics, is presented. A gas turbine model was developed in software for power plant simulation. Calculation algorithms based on an iterative model are shown for the isentropic efficiency of the compressor and for the isentropic efficiency of the turbine based on the turbine inlet temperature. The isentropic efficiency characteristics of the compressor and the turbine are developed by means of the above-mentioned algorithms. The development of gas turbines for high compression ratios was the main driving force for this analysis. The obtained gas turbine electric efficiency characteristics show that an increase of the pressure ratio above 50 is not justified, due to the slight increase in efficiency accompanied by a significant increase of the combustor outlet (turbine inlet) temperature.
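For reference, the constant-gamma relations between polytropic and isentropic efficiency can be sketched as follows; the paper's algorithm is iterative and accounts for temperature-dependent gas properties, so this is only a simplified illustration.

```python
# Constant-gamma relations between polytropic and isentropic efficiency for a
# compressor and a turbine; simplified sketch, not the paper's iterative model.

def isentropic_eff_compressor(pressure_ratio, eta_poly, gamma=1.4):
    m = (gamma - 1.0) / gamma
    return (pressure_ratio ** m - 1.0) / (pressure_ratio ** (m / eta_poly) - 1.0)

def isentropic_eff_turbine(expansion_ratio, eta_poly, gamma=1.33):
    m = (gamma - 1.0) / gamma
    return (1.0 - expansion_ratio ** (-eta_poly * m)) / (1.0 - expansion_ratio ** (-m))

print(isentropic_eff_compressor(50.0, 0.90))   # high pressure ratio case
print(isentropic_eff_turbine(50.0, 0.88))
```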
TensorCalculator: exploring the evolution of mechanical stress in the CCMV capsid
NASA Astrophysics Data System (ADS)
Kononova, Olga; Maksudov, Farkhad; Marx, Kenneth A.; Barsegov, Valeri
2018-01-01
A new computational methodology for the accurate numerical calculation of the Cauchy stress tensor, stress invariants, principal stress components, von Mises and Tresca tensors is developed. The methodology is based on the atomic stress approach which permits the calculation of stress tensors, widely used in continuum mechanics modeling of materials properties, using the output from the MD simulations of discrete atomic and C_α -based coarse-grained structural models of biological particles. The methodology mapped into the software package TensorCalculator was successfully applied to the empty cowpea chlorotic mottle virus (CCMV) shell to explore the evolution of mechanical stress in this mechanically-tested specific example of a soft virus capsid. We found an inhomogeneous stress distribution in various portions of the CCMV structure and stress transfer from one portion of the virus structure to another, which also points to the importance of entropic effects, often ignored in finite element analysis and elastic network modeling. We formulate a criterion for elastic deformation using the first principal stress components. Furthermore, we show that von Mises and Tresca stress tensors can be used to predict the onset of a viral capsid’s mechanical failure, which leads to total structural collapse. TensorCalculator can be used to study stress evolution and dynamics of defects in viral capsids and other large-size protein assemblies.
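A small, self-contained illustration of the equivalent-stress step is given below; it is not TensorCalculator code, only the standard principal-stress, von Mises and Tresca relations applied to an assumed stress tensor.

```python
# Von Mises and Tresca equivalent stresses from a 3x3 Cauchy stress tensor,
# computed from its principal values (illustrative, not TensorCalculator code).

import numpy as np

def equivalent_stresses(sigma):
    """Return (von Mises, Tresca) equivalent stress for a symmetric 3x3 tensor."""
    s1, s2, s3 = np.sort(np.linalg.eigvalsh(sigma))[::-1]   # principal stresses
    von_mises = np.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))
    tresca = s1 - s3                                        # twice the maximum shear stress
    return von_mises, tresca

sigma = np.array([[120.0, 30.0, 0.0],
                  [30.0,  80.0, 0.0],
                  [0.0,    0.0, 40.0]])   # MPa, assumed values
print(equivalent_stresses(sigma))
```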
NASA Astrophysics Data System (ADS)
Sánchez-Márquez, Jesús; Zorrilla, David; García, Víctor; Fernández, Manuel
2018-07-01
This work presents a new development based on the condensation scheme proposed by Chamorro and Pérez, in which new terms to correct the frozen molecular orbital approximation have been introduced (improved frontier molecular orbital approximation). The changes performed on the original development make it possible to take orbital relaxation effects into account, providing results equivalent to those achieved by the finite difference approximation and leading to a methodology with great advantages. Local reactivity indices based on this new development have been obtained for a sample set of molecules, and they have been compared with indices based on the frontier molecular orbital and finite difference approximations. A new definition based on the improved frontier molecular orbital methodology for the dual descriptor index is also shown. In addition, taking advantage of the characteristics of the definitions obtained with the new condensation scheme, the local philicity descriptor is analysed by separating the components corresponding to the frontier molecular orbital approximation and orbital relaxation effects; the local multiphilic descriptor is analysed in the same way. Finally, the effect of the basis set is studied, and calculations using DFT, CI and Møller-Plesset methodologies are performed to analyse the consequence of different electron-correlation levels.
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods which are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which gives the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of that required in the original ETA.
Code of Federal Regulations, 2010 CFR
2010-10-01
... patient utilization calendar year as identified from Medicare claims is calendar year 2007. (4) Wage index... calculating the per-treatment base rate for 2011 are as follows: (1) Per patient utilization in CY 2007, 2008..., 2008 or 2009 to determine the year with the lowest per patient utilization. (2) Update of per treatment...
NASA Astrophysics Data System (ADS)
Fekete, Tamás
2018-05-01
Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific-engineering paradigm, structural integrity, has been developing; it is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has highly contributed to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is because the existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well-grounded foundation for a new modeling framework of structural integrity. This paper presents the first findings of the research project.
A novel methodology for interpreting air quality measurements from urban streets using CFD modelling
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming
2011-09-01
In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NOx data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary, the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and direction data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentrations at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assess the representativeness of monitoring locations for population exposure studies.
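The weighting step can be sketched in a few lines; the scenario concentrations and frequencies below are placeholders, not results from the Stratford Road study.

```python
# Frequency-weighted combination of CFD concentration fields: each dispersion
# scenario i contributes its normalised concentration C_i weighted by the
# fraction of hours f_i that scenario occurred in the averaging period.
# Scenario values below are placeholders, not results from the SR canyon study.

def long_term_average(c_scenario, freq):
    assert abs(sum(freq.values()) - 1.0) < 1e-6
    return sum(c_scenario[s] * freq[s] for s in c_scenario)

c_receptor = {"NW_light": 85.0, "NW_strong": 40.0, "SE_light": 120.0, "SE_strong": 60.0}
freq       = {"NW_light": 0.30, "NW_strong": 0.20, "SE_light": 0.35, "SE_strong": 0.15}
print(long_term_average(c_receptor, freq))   # long-term mean at one receptor
```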
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
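A highly simplified sketch of the two derived quantities is shown below; the exponential suppression model and its rate are illustrative stand-ins for the plant-specific non-suppression data, and the LHS samples are synthetic.

```python
# Sketch of two quantities derived from CFAST/LHS runs: the severity factor is
# taken here as the fraction of sampled fires that damage the component, and
# the non-suppression probability as the average chance that a damaging fire is
# not suppressed before its damage time. The exponential suppression model and
# rate are illustrative, not the plant-specific data of the study.

import math, random

def sf_and_pns(samples, suppression_rate_per_min=0.1):
    damaging = [s for s in samples if s["damage_min"] is not None]
    sf = len(damaging) / len(samples)
    if not damaging:
        return sf, 0.0
    pns = sum(math.exp(-suppression_rate_per_min * s["damage_min"])
              for s in damaging) / len(damaging)
    return sf, pns

random.seed(1)
lhs_samples = [{"damage_min": random.uniform(5, 30) if random.random() < 0.4 else None}
               for _ in range(1000)]
print(sf_and_pns(lhs_samples))
```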
[New calculation algorithms in brachytherapy for iridium 192 treatments].
Robert, C; Dumas, I; Martinetti, F; Chargari, C; Haie-Meder, C; Lefkopoulos, D
2018-05-18
Since 1995, brachytherapy dosimetry protocols have followed the methodology recommended by Task Group 43. This methodology, which has the advantage of being fast, is based on several approximations that are not always valid in clinical conditions. Model-based dose calculation algorithms have recently emerged in treatment planning stations and are considered a major evolution, allowing for consideration of the patient's finite dimensions, tissue heterogeneities and the presence of high atomic number materials in applicators. In 2012, a report from the American Association of Physicists in Medicine Radiation Therapy Task Group 186 reviewed these models and made recommendations for their clinical implementation. This review focuses on the use of model-based dose calculation algorithms in the context of iridium 192 treatments. After a description of these algorithms and their clinical implementation, a summary of the main questions raised by these new methods is presented. Considerations regarding the choice of the medium used for dose specification and the recommended methodology for assigning material characteristics are especially described. In the last part, recent concrete examples from the literature illustrate the capabilities of these new algorithms on clinical cases. Copyright © 2018 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
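For context, the TG-43 dose-rate formalism that these model-based algorithms refine can be sketched as follows, here reduced to a point-source geometry function with illustrative g(r) and anisotropy values.

```python
# The TG-43 dose-rate formalism that model-based algorithms are meant to refine:
#   Ddot(r, theta) = S_K * Lambda * [G(r,theta)/G(r0,theta0)] * g(r) * F(r,theta)
# with r0 = 1 cm. The point-source geometry function 1/r^2 and the constant
# g and F values below are simplifications for illustration only.

def tg43_dose_rate(sk, dose_rate_constant, r_cm, g_r=1.0, f_aniso=1.0):
    """Dose rate (cGy/h) at radius r_cm for a point-source approximation."""
    geometry_ratio = (1.0 / r_cm ** 2) / (1.0 / 1.0 ** 2)   # G(r)/G(r0), r0 = 1 cm
    return sk * dose_rate_constant * geometry_ratio * g_r * f_aniso

# e.g. an Ir-192 source with S_K ~ 40800 cGy*cm^2/h (roughly 10 Ci) and Lambda ~ 1.11 cGy/(h*U)
print(tg43_dose_rate(40800.0, 1.11, r_cm=2.0))
```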
New Approaches and Applications for Monte Carlo Perturbation Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan
2017-02-01
This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculation, perturbation calculation based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
Külahci, Fatih; Sen, Zekâi
2009-09-15
The classical solid/liquid distribution coefficient, K(d), for radionuclides in water-sediment systems depends on many parameters, such as flow, geology, pH, acidity, alkalinity, total hardness, radioactivity concentration, etc., in a region. Consideration of all these effects requires a regional analysis with an effective methodology, which in this paper is based on the cumulative semivariogram concept. Although classical K(d) calculations are punctual and cannot represent a regional pattern, a regional calculation methodology is suggested here through the use of the Absolute Point Cumulative SemiVariogram (APCSV) technique. The application of the methodology is presented for (137)Cs and (90)Sr measurements at a set of points in the Keban Dam reservoir, Turkey.
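The punctual calculation that the APCSV technique regionalises is simply the ratio of sediment to water activity concentrations; the station values below are placeholders, not Keban Dam measurements.

```python
# Point ("punctual") distribution coefficient at a single sampling station:
#   Kd = activity concentration in sediment (Bq/kg) / activity in water (Bq/L).
# The regional APCSV methodology then interpolates these point values over the
# reservoir; the numbers below are placeholders, not Keban Dam measurements.

def kd(sediment_bq_per_kg, water_bq_per_l):
    return sediment_bq_per_kg / water_bq_per_l   # L/kg

stations = {"S1": (850.0, 1.2), "S2": (640.0, 0.9), "S3": (910.0, 1.6)}  # (137)Cs, assumed
print({name: kd(*vals) for name, vals in stations.items()})
```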
Alternative Fuels Data Center: Vehicle Cost Calculator Assumptions and Methodology
Alternative Fuels Data Center: Vehicle Cost Calculator Widget Assumptions and Methodology
Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.
1993-01-01
An axisymmetric single-engine Computational Fluid Dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. There were two objectives of this work: the first was to calculate an axisymmetric approximation of the nozzle, plume and base region flow fields of the S1-C/F1, relate/scale this to flight data, and apply this scaling factor to NLS/STME axisymmetric calculations from a parallel effort. The second was to assess the differences in F1 and STME plume shear layer development and concentration of combustible gases. This second piece of information was to be input/supporting data for assumptions made in the NLS2 base temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base region calculations were made for ten thousand feet and 57 thousand feet altitude at vehicle flight velocity and in a stagnant freestream. FDNS was implemented with a 14-species, 28-reaction finite rate chemistry model plus a soot burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear layer constituents are compared to an STME plume. Conclusions are made about the validity and status of the analysis and the NLS2 vehicle base thermal environment definition methodology.
Shyam, Sangeetha; Wai, Tony Ng Kock; Arshad, Fatimah
2012-01-01
This paper outlines the methodology used to add glycaemic index (GI) and glycaemic load (GL) functionality to DietPLUS, a Microsoft Excel-based Malaysian food composition database and diet intake calculator. Locally determined GI values and published international GI databases were used as the source of GI values. Previously published methodology for GI value assignment was modified to add GI and GL calculators to the database. Two popular local low-GI foods were added to the DietPLUS database, bringing the total number of foods in the database to 838. Overall, in relation to the 539 major carbohydrate foods in the Malaysian Food Composition Database, 243 (45%) food items had local Malaysian values or were directly matched to the international GI database, and another 180 (33%) of the foods were linked to closely related foods in the GI databases used. The mean ± SD dietary GI and GL of the dietary intake of 63 women with previous gestational diabetes mellitus, calculated using DietPLUS version 3, were 62 ± 6 and 142 ± 45, respectively. These values were comparable to those reported from other local studies. DietPLUS version 3, a simple Microsoft Excel-based programme, aids the calculation of dietary GI and GL for Malaysian diets based on food records.
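The diet-level GI and GL arithmetic behind such a calculator can be sketched as follows; the foods and their GI and carbohydrate values are placeholders, not DietPLUS entries.

```python
# Standard diet-level calculations that a GI/GL module performs:
#   GL of a food = GI x available carbohydrate (g) / 100
#   dietary GL   = sum of food GLs
#   dietary GI   = 100 * dietary GL / total available carbohydrate
# Food values below are placeholders, not entries from the DietPLUS database.

def diet_gi_gl(foods):
    total_carb = sum(carb for _, carb in foods.values())
    diet_gl = sum(gi * carb / 100.0 for gi, carb in foods.values())
    diet_gi = 100.0 * diet_gl / total_carb
    return diet_gi, diet_gl

foods = {"white rice": (73, 45.0), "dhal": (30, 15.0), "apple": (36, 14.0)}  # (GI, carb g)
print(diet_gi_gl(foods))
```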
USGS Methodology for Assessing Continuous Petroleum Resources
Charpentier, Ronald R.; Cook, Troy A.
2011-01-01
The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S
2012-07-01
The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in radiotherapy centers in our country, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA TECDOC 1583). A thorax anthropomorphic phantom (002LFC, CIRS Inc.) was used to measure 7 tests that simulate the whole chain of external beam TPS planning. The doses were measured with ion chambers, and the deviation between measured and TPS-calculated dose was reported. This methodology, which employs the same phantom and the same setup test cases, was tested in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced; the magnitude of the differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with more advanced algorithms. The number of deviations outside the agreement criteria increased with the beam energy and decreased with the advancement of the TPS calculation algorithm. Large deviations were seen for some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits on TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.
Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas
2017-03-01
Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV). Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis techniques has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods for computerized image analysis for CPA assessment have proven to be a major limitation. The current work introduces a three-stage fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stage of the methodology, respectively. Due to the existence of several types of tissue regions in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images have been employed, obtaining 1.31% mean absolute CPA error, with 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid manual threshold-based and region selection processes, widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
Analyzing Problem's Difficulty Based on Neural Networks and Knowledge Map
ERIC Educational Resources Information Center
Kuo, Rita; Lien, Wei-Peng; Chang, Maiga; Heh, Jia-Sheng
2004-01-01
This paper proposes a methodology to calculate both the difficulty of basic problems and the difficulty of solving a problem. The method for calculating the difficulty of a problem is based on the process of constructing a problem, including Concept Selection, Unknown Designation, and Proposition Construction. Some necessary measures observed…
Wang, Nu; Boswell, Paul G
2017-10-20
Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect - gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, error in projected gradient retention times was as low as could be expected based on repeatability in the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable over the short term, but selectivity frequently drifts over the course of weeks. In this study, we set out to understand whether the issue of selectivity drift can be avoided by doing our experiments quickly, and whether there are any other factors that make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the error in retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient - a factor our model did not properly take into account. It appears that, at least with the stationary phase we used, gradient distortion is an important factor to take into account in retention projection in HILIC that is not usually important in RPLC. Copyright © 2017 Elsevier B.V. All rights reserved.
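For orientation, the sketch below projects a gradient retention time from an isocratic ln k = ln k_w − S·φ relationship by numerically integrating the general elution equation for an ideal linear gradient, i.e., exactly the idealization (no delay, dispersion, or mis-proportioning) that the back-calculation methodology is meant to correct. The k_w, S, and gradient parameters are invented.

```python
# Sketch: project a gradient retention time from an isocratic ln k = ln_kw - S*phi
# relationship by integrating the general elution equation
#   integral_0^tau dt / (t0 * k(phi(t))) = 1,   t_R ≈ tau + t0,
# for an ideal linear gradient (parameters invented for illustration).
import numpy as np

ln_kw, S = 6.0, 12.0                  # isocratic relationship ln k = ln_kw - S*phi
t0 = 1.0                              # column dead time, min
phi0, phi1, t_g = 0.05, 0.95, 20.0    # linear gradient from phi0 to phi1 over t_g min

def k(phi):
    return np.exp(ln_kw - S * phi)

def phi_of_t(t):
    return phi0 + (phi1 - phi0) * min(t / t_g, 1.0)

# integrate 1/(t0*k) in small time steps until the integral reaches 1
dt, integral, t = 1e-3, 0.0, 0.0
while integral < 1.0:
    integral += dt / (t0 * k(phi_of_t(t)))
    t += dt

print(f"projected retention time ≈ {t + t0:.2f} min")
```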
Probabilistic Based Modeling and Simulation Assessment
2010-06-01
different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the probability of injury...variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and...first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast. The
A 3D model retrieval approach based on Bayesian networks lightfield descriptor
NASA Astrophysics Data System (ADS)
Xiao, Qinhan; Li, Yanjun
2009-12-01
A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. Firstly, the 3D model is placed in a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from the binary views. The shape feature sequence is then learned into a BN model using a BN learning algorithm. Secondly, we propose a new 3D model retrieval method based on calculating the Kullback-Leibler divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is more robust to noise than existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of the proposed methodology.
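A minimal sketch of the retrieval step named above: rank candidate models by the Kullback-Leibler divergence between discrete distributions, used here as a stand-in for comparing a query descriptor with database descriptors. The distributions and model names are invented.

```python
# Minimal sketch: rank candidate models by Kullback-Leibler divergence
# between discrete probability distributions (invented query/database data).
import numpy as np

def kld(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

query = [0.40, 0.30, 0.20, 0.10]
database = {
    "model_A": [0.38, 0.32, 0.18, 0.12],
    "model_B": [0.10, 0.20, 0.30, 0.40],
}
ranking = sorted(database, key=lambda name: kld(query, database[name]))
for name in ranking:
    print(name, round(kld(query, database[name]), 4))
```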
Alternative power supply systems for remote industrial customers
NASA Astrophysics Data System (ADS)
Kharlamova, N. V.; Khalyasmaa, A. I.; Eroshenko, S. A.
2017-06-01
The paper addresses the problem of alternative power supply of remote industrial clusters with renewable electric energy generation. As a result of different technologies comparison, consideration is given to wind energy application. The authors present a methodology of mean expected wind generation output calculation, based on Weibull distribution, which provides an effective express-tool for preliminary assessment of required installed generation capacity. The case study is based on real data including database of meteorological information, relief characteristics, power system topology etc. Wind generation feasibility estimation for a specific territory is followed by power flow calculations using Monte Carlo methodology. Finally, the paper provides a set of recommendations to ensure safe and reliable power supply for the final customers and, subsequently, to provide sustainable development of the regions, located far from megalopolises and industrial centres.
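As an illustration of the Weibull-based estimate of mean expected wind generation output mentioned above, the sketch below integrates a simplified turbine power curve over a Weibull wind-speed distribution; all parameters (shape, scale, cut-in/rated/cut-out speeds, rated power) are invented and do not come from the paper's case study.

```python
# Sketch: mean expected wind-turbine output from a Weibull wind-speed
# distribution and a simplified power curve (all parameters are invented).
import numpy as np

k_shape, c_scale = 2.0, 7.5                     # Weibull shape (-) and scale (m/s)
v_cut_in, v_rated, v_cut_out = 3.0, 12.0, 25.0  # m/s
p_rated = 2.0                                   # turbine rated power, MW

def power(v):
    if v < v_cut_in or v >= v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    # cubic ramp between cut-in and rated wind speed
    return p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)

def weibull_pdf(v):
    return (k_shape / c_scale) * (v / c_scale) ** (k_shape - 1) * np.exp(-((v / c_scale) ** k_shape))

v = np.linspace(0.0, 30.0, 3001)
dv = v[1] - v[0]
mean_output = float(np.sum(np.array([power(x) for x in v]) * weibull_pdf(v)) * dv)
print(f"mean expected output ≈ {mean_output:.2f} MW "
      f"(capacity factor ≈ {mean_output / p_rated:.0%})")
```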
Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2011-11-14
We have recently developed a methodology for the calculation of exchange coupling constants J in weakly interacting polynuclear metal clusters. The method is based on unrestricted and restricted second order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) and is here applied to eight binuclear copper systems. Comparison of the SF-CV(2)-DFT results with experiment and with results obtained from other DFT and wave function based methods has been made. Restricted SF-CV(2)-DFT with the BH&HLYP functional consistently yields J values in excellent agreement with experiment. The results acquired from this scheme are comparable in quality to those obtained by accurate multi-reference wave function methodologies such as difference dedicated configuration interaction and the complete active space with second-order perturbation theory. © 2011 American Institute of Physics.
A VaR Algorithm for Warrants Portfolio
NASA Astrophysics Data System (ADS)
Dai, Jun; Ni, Liyun; Wang, Xiangrong; Chen, Weizhong
Based on the Gamma-Vega-Cornish-Fisher methodology, this paper proposes an algorithm for calculating VaR by adjusting the quantile under a given confidence level using the four moments (mean, variance, skewness and kurtosis) of the warrant portfolio return and estimating the portfolio variance by the EWMA methodology. The proposed algorithm also accounts for the attenuation of the effect of historical returns on the portfolio return of future days. An empirical study shows that, compared with the Gamma-Cornish-Fisher method and the standard normal method, the VaR calculated by the Gamma-Vega-Cornish-Fisher approach improves the effectiveness of portfolio risk forecasting by taking the Gamma risk and the Vega risk of the warrants into account. A significance test was conducted on the calculation results using the two-tailed test developed by Kupiec. The test results show that the calculated VaRs of the warrant portfolio all pass the significance test at the 5% significance level.
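The sketch below shows the generic building blocks named in the abstract, a Cornish-Fisher quantile adjustment using the first four moments combined with an EWMA variance estimate; the return series, decay factor, and confidence level are invented, and the warrant-specific Gamma/Vega terms are not modeled.

```python
# Sketch of a Cornish-Fisher VaR with EWMA variance (invented return data).
import numpy as np
from scipy.stats import norm, skew, kurtosis

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=500) * 0.01      # daily portfolio returns

# EWMA variance, giving more weight to recent returns (decay factor assumed)
lam, var = 0.94, returns[0] ** 2
for r in returns[1:]:
    var = lam * var + (1.0 - lam) * r ** 2
sigma = np.sqrt(var)

# Cornish-Fisher adjustment of the normal quantile using skewness and excess kurtosis
alpha = 0.95
z = norm.ppf(1.0 - alpha)
s, k = skew(returns), kurtosis(returns)              # kurtosis() returns excess kurtosis
z_cf = (z
        + (z**2 - 1.0) * s / 6.0
        + (z**3 - 3.0 * z) * k / 24.0
        - (2.0 * z**3 - 5.0 * z) * s**2 / 36.0)

var_95 = -(returns.mean() + z_cf * sigma)            # one-day VaR as a positive loss
print(f"95% one-day Cornish-Fisher VaR ≈ {var_95:.4f} (fraction of portfolio value)")
```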
NASA Astrophysics Data System (ADS)
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers the question of populating the relevant SIEM nodes with calculated objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. The technique is also intended to establish real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events occur and on predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the expert assessments.
Radiological Characterization Methodology of INEEL Stored RH-TRU Waste from ANL-E
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajiv N. Bhatt
2003-02-01
An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to obtain these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using this methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuan, P.; Bhatt, R.N.
2003-01-14
An Acceptable Knowledge (AK)-based radiological characterization methodology is being developed for RH TRU waste generated from ANL-E hot cell operations performed on fuel elements irradiated in the EBR-II reactor. The methodology relies on AK for the composition of the fresh fuel elements, their irradiation history, and the waste generation and collection processes. Radiological characterization of the waste involves estimates of the quantities of significant fission products and transuranic isotopes in the waste. Methods based on reactor and physics principles are used to obtain these estimates. Because of the availability of AK and the robustness of the calculation methods, the AK-based characterization methodology offers a superior alternative to traditional waste assay techniques. Using the methodology, it is shown that the radiological parameters of a test batch of ANL-E waste are well within the proposed WIPP Waste Acceptance Criteria limits.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
...--Methodology for Calculating ``on'' or ``off'' Total Unemployment Rate Indicators for Purposes of Determining...'' or ``off'' total unemployment rate (TUR) indicators to determine when extended benefit (EB) periods...-State Extended Benefits Program--Methodology for Calculating ``on'' or ``off'' Total Unemployment Rate...
Analysis and methodology for aeronautical systems technology program planning
NASA Technical Reports Server (NTRS)
White, M. J.; Gershkoff, I.; Lamkin, S.
1983-01-01
A structured methodology was developed that allows the generation, analysis, and rank-ordering of system concepts by their benefits and costs, indicating the preferred order of implementation. The methodology is supported by a base of data on civil transport aircraft fleet growth projections and data on aircraft performance relating the contribution of each element of the aircraft to overall performance. The performance data are used to assess the benefits of proposed concepts. The methodology includes a computer program for performing the calculations needed to rank-order the concepts and compute their cumulative benefit-to-cost ratio. The use of the methodology and supporting data is illustrated through the analysis of actual system concepts from various sources.
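A minimal sketch of the rank-ordering step described above: sort concepts by benefit-to-cost ratio and report the cumulative ratio in the resulting implementation order. The concept names, benefits, and costs are invented.

```python
# Minimal sketch: rank system concepts by benefit-to-cost ratio and report the
# cumulative ratio in the resulting order of implementation (numbers invented).
concepts = [
    # (name, benefit, cost) in consistent (arbitrary) units
    ("concept A", 120.0, 30.0),
    ("concept B",  80.0, 40.0),
    ("concept C",  60.0, 10.0),
]

ranked = sorted(concepts, key=lambda c: c[1] / c[2], reverse=True)
cum_benefit = cum_cost = 0.0
for name, benefit, cost in ranked:
    cum_benefit += benefit
    cum_cost += cost
    print(f"{name}: B/C = {benefit / cost:.2f}, "
          f"cumulative B/C = {cum_benefit / cum_cost:.2f}")
```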
Gas flow calculation method of a ramjet engine
NASA Astrophysics Data System (ADS)
Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir
2017-11-01
In the present study, a calculation methodology for the gas dynamics equations in a ramjet engine is presented. The algorithm is based on Godunov's scheme. For the implementation of the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple air intake configurations and scramjet models.
77 FR 53059 - Risk-Based Capital Guidelines: Market Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
...The Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve System (Board), and Federal Deposit Insurance Corporation (FDIC) are revising their market risk capital rules to better capture positions for which the market risk capital rules are appropriate; reduce procyclicality; enhance the rules' sensitivity to risks that are not adequately captured under current methodologies; and increase transparency through enhanced disclosures. The final rule does not include all of the methodologies adopted by the Basel Committee on Banking Supervision for calculating the standardized specific risk capital requirements for debt and securitization positions due to their reliance on credit ratings, which is impermissible under the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. Instead, the final rule includes alternative methodologies for calculating standardized specific risk capital requirements for debt and securitization positions.
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
NASA Astrophysics Data System (ADS)
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and the structural configuration evolution of glasses were studied. The degrees of freedom based on topological constraint theory are correlated with the configuration evolution; considering the chemical composition and the configuration change, an analytical equation for calculating the thermal expansion coefficient of glasses from the degrees of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using the approach stated above. The results showed that this approach was energetically favorable for glass materials and revealed the corresponding underlying essence from the viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses that lack periodic order.
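For context, the sketch below shows the standard constraint-counting step that underlies the degrees-of-freedom estimate (bond-stretching r/2 and bond-bending 2r−3 constraints per atom of coordination r, with f = 3 − ⟨n_c⟩); the composition is a generic GeSe2-like example, and the paper's temperature-dependent constraints and TEC equation are not reproduced. A negative f indicates an over-constrained (stressed-rigid) network.

```python
# Sketch of topological constraint counting: per-atom constraints
# n_c = r/2 (bond stretching) + (2r - 3) (bond bending) for coordination r,
# and degrees of freedom f = 3 - <n_c>. Composition is illustrative (GeSe2-like).
coordination = {"Ge": 4, "Se": 2}
fractions = {"Ge": 1.0 / 3.0, "Se": 2.0 / 3.0}

def constraints_per_atom(r):
    return r / 2.0 + (2.0 * r - 3.0)

n_c = sum(fractions[el] * constraints_per_atom(r) for el, r in coordination.items())
f = 3.0 - n_c
print(f"mean constraints per atom = {n_c:.2f}, degrees of freedom f = {f:.2f}")
```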
Variability aware compact model characterization for statistical circuit design optimization
NASA Astrophysics Data System (ADS)
Qiao, Ying; Qian, Kun; Spanos, Costas J.
2012-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
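A minimal sketch of linear propagation of variance, the generic technique the abstract names: the variance of a performance metric is approximated as J Σ Jᵀ, where J holds sensitivities to compact-model parameters and Σ is their covariance. All numbers below are invented.

```python
# Sketch of linear propagation of variance: Var(y) ≈ J Σ Jᵀ (invented numbers).
import numpy as np

# sensitivities d(metric)/d(parameter), e.g. to [threshold voltage, mobility factor]
J = np.array([[-2.0e-4, 5.0e-5]])

# covariance of the compact-model parameters (variances on the diagonal)
Sigma = np.array([[(0.02) ** 2, 1.0e-6],
                  [1.0e-6,      (0.05) ** 2]])

var_metric = J @ Sigma @ J.T
print(f"propagated std of the metric ≈ {np.sqrt(var_metric[0, 0]):.3e}")
```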
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdy, M.W.; Couch, M.D.
A vehicle comparison methodology based on the Otto-Engine Equivalent (OEE) vehicle concept is described. As an illustration of this methodology, the concept is used to make projections of the fuel economy potential of passenger cars using various alternative power systems. Sensitivities of OEE vehicle results to assumptions made in the calculational procedure are discussed. Factors considered include engine torque boundary, rear axle ratio, performance criteria, engine transient response, and transmission shift logic.
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
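A generic sketch of the linear-algebra step, not the paper's spreadsheet: a measured isotopomer pattern is modelled as a linear combination of reference patterns and the mixing coefficients are recovered by least squares. The patterns and mixing fractions below are invented, not the RGGGLK data.

```python
# Generic sketch: recover mixing fractions of isotopomer patterns by least squares.
import numpy as np

# columns: natural-abundance pattern and a labelled-peptide pattern (M0..M4), invented
natural  = np.array([0.90, 0.08, 0.015, 0.004, 0.001])
labelled = np.array([0.05, 0.10, 0.700, 0.100, 0.050])
A = np.column_stack([natural, labelled])

# simulate a measurement that is 70% unlabelled / 30% newly synthesized, plus noise
rng = np.random.default_rng(3)
measured = 0.7 * natural + 0.3 * labelled + rng.normal(0, 0.002, size=5)

coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
fractions = coef / coef.sum()
print(f"estimated fractional synthesis ≈ {fractions[1]:.2f}")
```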
Code of Federal Regulations, 2011 CFR
2011-10-01
....171 of this part, into a single per treatment base rate developed from 2007 claims data. The steps to..., or 2009. CMS removes the effects of enrollment and price growth from total expenditures for 2007...
Optimization of lamp arrangement in a closed-conduit UV reactor based on a genetic algorithm.
Sultan, Tipu; Ahmad, Zeshan; Cho, Jinsoo
2016-01-01
The choice for the arrangement of the UV lamps in a closed-conduit ultraviolet (CCUV) reactor significantly affects the performance. However, a systematic methodology for the optimal lamp arrangement within the chamber of the CCUV reactor is not well established in the literature. In this research work, we propose a viable systematic methodology for the lamp arrangement based on a genetic algorithm (GA). In addition, we analyze the impacts of the diameter, angle, and symmetry of the lamp arrangement on the reduction equivalent dose (RED). The results are compared based on the simulated RED values and evaluated using the computational fluid dynamics simulations software ANSYS FLUENT. The fluence rate was calculated using commercial software UVCalc3D, and the GA-based lamp arrangement optimization was achieved using MATLAB. The simulation results provide detailed information about the GA-based methodology for the lamp arrangement, the pathogen transport, and the simulated RED values. A significant increase in the RED values was achieved by using the GA-based lamp arrangement methodology. This increase in RED value was highest for the asymmetric lamp arrangement within the chamber of the CCUV reactor. These results demonstrate that the proposed GA-based methodology for symmetric and asymmetric lamp arrangement provides a viable technical solution to the design and optimization of the CCUV reactor.
Calculating the habitable zones of multiple star systems with a new interactive Web site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Tobias W. A.; Haghighipour, Nader
We have developed a comprehensive methodology and an interactive Web site for calculating the habitable zone (HZ) of multiple star systems. Using the concept of spectral weight factor, as introduced in our previous studies of the calculations of HZ in and around binary star systems, we calculate the contribution of each star (based on its spectral energy distribution) to the total flux received at the top of the atmosphere of an Earth-like planet, and use the models of the HZ of the Sun to determine the boundaries of the HZ in multiple star systems. Our interactive Web site for carrying out these calculations is publicly available at http://astro.twam.info/hz. We discuss the details of our methodology and present its application to some of the multiple star systems detected by the Kepler space telescope. We also present the instructions for using our interactive Web site, and demonstrate its capabilities by calculating the HZ for two interesting analytical solutions of the three-body problem.
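A simplified sketch of the flux-weighting idea for a binary system: sum each star's spectrally weighted flux at a test location and compare the total with single-star HZ flux limits. The luminosities, weight factors, star positions, and flux limits below are placeholders, not the Web site's models.

```python
# Simplified sketch: combined spectrally weighted flux vs. HZ flux limits
# for a binary (all numbers are placeholders).
import numpy as np

L = np.array([1.0, 0.4])        # stellar luminosities, L_sun
W = np.array([1.0, 0.85])       # spectral weight factors for each star
star_x = np.array([0.0, 10.0])  # star positions along a line, AU

S_inner, S_outer = 1.1, 0.36    # assumed HZ flux limits, in units of the solar constant

def weighted_flux(x):
    d2 = (x - star_x) ** 2
    return float(np.sum(W * L / d2))   # flux in solar constants (1 AU scaling)

for x in np.linspace(0.5, 5.0, 10):
    S = weighted_flux(x)
    tag = "inside HZ" if S_outer <= S <= S_inner else ""
    print(f"x = {x:4.1f} AU  weighted flux = {S:5.2f} S0  {tag}")
```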
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
Mutel, Christopher L; Pfister, Stephan; Hellweg, Stefanie
2012-01-17
We describe a new methodology for performing regionalized life cycle assessment and systematically choosing the spatial scale of regionalized impact assessment methods. We extend standard matrix-based calculations to include matrices that describe the mapping from inventory to impact assessment spatial supports. Uncertainty in inventory spatial data is modeled using a discrete spatial distribution function, which in a case study is derived from empirical data. The minimization of global spatial autocorrelation is used to choose the optimal spatial scale of impact assessment methods. We demonstrate these techniques on electricity production in the United States, using regionalized impact assessment methods for air emissions and freshwater consumption. Case study results show important differences between site-generic and regionalized calculations, and provide specific guidance for future improvements of inventory data sets and impact assessment methods.
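A minimal sketch of the extended matrix calculation described above: compute inventory flows as g = B A⁻¹ f, then apply a mapping matrix that distributes each flow over the spatial supports of regionalized characterization factors. All matrices below are invented toy data.

```python
# Minimal sketch of regionalized matrix-based LCA (invented toy data).
import numpy as np

A = np.array([[1.0, -0.2],       # technosphere matrix (2 processes)
              [0.0,  1.0]])
B = np.array([[0.5, 1.2]])       # biosphere matrix: one emission per unit process output
f = np.array([1.0, 0.0])         # functional unit demand

g = B @ np.linalg.solve(A, f)    # total emission per functional unit

M = np.array([[0.7, 0.3]])       # mapping: share of the emission falling in regions R1, R2
C = np.array([2.0, 5.0])         # regionalized characterization factors (impact per kg)

impact = float((g @ M) @ C)      # regionalized impact score
print(f"emission = {g[0]:.2f} kg, regionalized impact = {impact:.2f}")
```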
Application of Risk-Based Inspection method for gas compressor station
NASA Astrophysics Data System (ADS)
Zhang, Meng; Liang, Wei; Qiu, Zeyang; Lin, Yang
2017-05-01
Given the complex process and the large amount of equipment involved, gas compressor stations carry significant risk. At present, research on the integrity management of gas compressor stations is insufficient. In this paper, the basic principle of Risk Based Inspection (RBI) and the RBI methodology are studied, and the RBI process for the gas compressor station is developed. The corrosion loops and logistics loops of the gas compressor station are determined through study of the corrosion mechanisms and the process of the gas compressor station. The probability of failure is calculated using modified coefficients, and the consequence of failure is calculated by a quantitative method. In particular, we address the application of an RBI methodology in a gas compressor station. The resulting risk ranking helps identify the best preventive inspection plan in the case study.
Dahlgren, Björn; Reif, Maria M; Hünenberger, Philippe H; Hansen, Niels
2012-10-09
The raw ionic solvation free energies calculated on the basis of atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [Kastenholz, M. A.; Hünenberger, P. H. J. Chem. Phys.2006, 124, 224501 and Reif, M. M.; Hünenberger, P. H. J. Chem. Phys.2011, 134, 144104], the application of an appropriate correction scheme allows for a conversion of the methodology-dependent raw data into methodology-independent results. In this work, methodology-independent derivative thermodynamic hydration and aqueous partial molar properties are calculated for the Na(+) and Cl(-) ions at P° = 1 bar and T(-) = 298.15 K, based on the SPC water model and on ion-solvent Lennard-Jones interaction coefficients previously reoptimized against experimental hydration free energies. The hydration parameters considered are the hydration free energy and enthalpy. The aqueous partial molar parameters considered are the partial molar entropy, volume, heat capacity, volume-compressibility, and volume-expansivity. Two alternative calculation methods are employed to access these properties. Method I relies on the difference in average volume and energy between two aqueous systems involving the same number of water molecules, either in the absence or in the presence of the ion, along with variations of these differences corresponding to finite pressure or/and temperature changes. Method II relies on the calculation of the hydration free energy of the ion, along with variations of this free energy corresponding to finite pressure or/and temperature changes. Both methods are used considering two distinct variants in the application of the correction scheme. In variant A, the raw values from the simulations are corrected after the application of finite difference in pressure or/and temperature, based on correction terms specifically designed for derivative parameters at P° and T(-). In variant B, these raw values are corrected prior to differentiation, based on corresponding correction terms appropriate for the different simulation pressures P and temperatures T. The results corresponding to the different calculation schemes show that, except for the hydration free energy itself, accurate methodological independence and quantitative agreement with even the most reliable experimental parameters (ion-pair properties) are not yet reached. Nevertheless, approximate internal consistency and qualitative agreement with experimental results can be achieved, but only when an appropriate correction scheme is applied, along with a careful consideration of standard-state issues. In this sense, the main merit of the present study is to set a clear framework for these types of calculations and to point toward directions for future improvements, with the ultimate goal of reaching a consistent and quantitative description of single-ion hydration thermodynamics in molecular dynamics simulations.
The Otto-engine-equivalent vehicle concept
NASA Technical Reports Server (NTRS)
Dowdy, M. W.; Couch, M. D.
1978-01-01
A vehicle comparison methodology based on the Otto-Engine Equivalent (OEE) vehicle concept is described. As an illustration of this methodology, the concept is used to make projections of the fuel economy potential of passenger cars using various alternative power systems. Sensitivities of OEE vehicle results to assumptions made in the calculational procedure are discussed. Factors considered include engine torque boundary, rear axle ratio, performance criteria, engine transient response, and transmission shift logic.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
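As a reduced sketch of the top-level fusion step, the example below combines two local state estimates by inverse-covariance weighting, which is the linear-minimum-variance solution when the local estimates are uncorrelated. The numbers are toy values, and the adaptive fading unscented Kalman filters that produce the local estimates are not reproduced.

```python
# Reduced sketch: fuse two local estimates by inverse-covariance weighting (toy data).
import numpy as np

x1 = np.array([10.2, 0.48]); P1 = np.diag([0.30, 0.02])   # local estimate 1 and covariance
x2 = np.array([ 9.8, 0.52]); P2 = np.diag([0.10, 0.05])   # local estimate 2 and covariance

P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(P1_inv + P2_inv)
x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)

print("fused state     :", np.round(x_fused, 3))
print("fused covariance:", np.round(np.diag(P_fused), 4))
```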
Considerations on methodological challenges for water footprint calculations.
Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R
2012-01-01
We have investigated how different approaches for water footprint (WF) calculations lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, results obtained from any WF calculation are reflective of the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed inadequate estimation of irrigation water when a widely used simple approach was used. The method resulted in an overestimation of blue water and an underestimation of green water usage. Dependent on the chosen (available) water quality standard, the final grey WF can differ up to a factor of 10 and more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving water quality standards.
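The sketch below illustrates why the grey water footprint is so sensitive to the chosen water quality standard: grey WF = L / (c_max − c_nat), so the result scales with the assumed maximum allowable concentration. The pollutant load and concentrations are invented, not the sugar beet case-study values.

```python
# Sketch of a grey water footprint calculation (invented numbers):
# grey WF = L / (c_max - c_nat), shown for two alternative quality standards.
load = 2.0          # pollutant load reaching the water body, kg per tonne of product
c_nat = 0.01        # natural background concentration, kg/m^3

for c_max in (0.05, 0.02):               # two alternative water quality standards, kg/m^3
    grey_wf = load / (c_max - c_nat)     # m^3 of dilution water per tonne of product
    print(f"c_max = {c_max:.2f} kg/m^3  ->  grey WF = {grey_wf:.0f} m^3/t")
```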
Locational Marginal Pricing in the Campus Power System at the Power Distribution Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Gu, Yi; Zhang, Yingchen
2016-11-14
In the development of smart grid at distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distributional nodal prices. Both Direct Current Optimal Power Flow and Alternate Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate themore » pricing methodology.« less
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Cruz, Jose A.; Johnson Stephen B.; Lo, Yunnhon
2015-01-01
This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS) which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as being an extension of control theory; and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
The added value of thorough economic evaluation of telemedicine networks.
Le Goff-Pronost, Myriam; Sicotte, Claude
2010-02-01
This paper proposes a thorough framework for the economic evaluation of telemedicine networks. A standard cost analysis methodology was used as the initial base, similar to the evaluation method currently being applied to telemedicine, and to which we suggest adding subsequent stages that enhance the scope and sophistication of the analytical methodology. We completed the methodology with a longitudinal and stakeholder analysis, followed by the calculation of a break-even threshold, a calculation of the economic outcome based on net present value (NPV), an estimate of the social gain through external effects, and an assessment of the probability of social benefits. In order to illustrate the advantages, constraints and limitations of the proposed framework, we tested it in a paediatric cardiology tele-expertise network. The results demonstrate that the project threshold was not reached after the 4 years of the study. Also, the calculation of the project's NPV remained negative. However, the additional analytical steps of the proposed framework allowed us to highlight alternatives that can make this service economically viable. These included: use over an extended period of time, extending the network to other telemedicine specialties, or including it in the services offered by other community hospitals. In sum, the results presented here demonstrate the usefulness of an economic evaluation framework as a way of offering decision makers the tools they need to make comprehensive evaluations of telemedicine networks.
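A minimal sketch of the NPV and break-even calculations named in the framework, applied to yearly cash flows of a hypothetical telemedicine network; the investment, yearly gains, and discount rate are invented.

```python
# Minimal sketch: net present value and (undiscounted) break-even year
# from yearly cash flows of a hypothetical telemedicine network.
cash_flows = [-120000, 20000, 25000, 30000, 35000]   # year 0 investment, then net yearly gains
rate = 0.05                                          # discount rate

npv, cumulative = 0.0, 0.0
break_even_year = None
for year, cf in enumerate(cash_flows):
    npv += cf / (1.0 + rate) ** year
    cumulative += cf
    if break_even_year is None and cumulative >= 0.0:
        break_even_year = year

print(f"NPV = {npv:,.0f}")
print("break-even year:", break_even_year if break_even_year is not None else "not reached")
```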
NASA Astrophysics Data System (ADS)
Zobin, V. M.; Cruz-Bravo, A. A.; Ventura-Ramírez, F.
2010-06-01
A macroseismic methodology of seismic risk microzonation in a low-rise city based on the vulnerability of residential buildings is proposed and applied to Colima city, Mexico. The seismic risk microzonation for Colima consists of two elements: the mapping of residential blocks according to their vulnerability level and the calculation of an expert-opinion based damage probability matrix (DPM) for a given level of earthquake intensity and a given type of residential block. A specified exposure time to the seismic risk for this zonation is equal to the interval between two destructive earthquakes. The damage probability matrices were calculated for three types of urban buildings and five types of residential blocks in Colima. It was shown that only 9% of 1409 residential blocks are able to resist the Modified Mercalli (MM) intensity VII and VIII earthquakes without significant damage. The proposed DPM-2007 is in good accordance with the experimental damage curves based on the macroseismic evaluation of 3332 residential buildings in Colima that was carried out after the 21 January 2003 intensity MM VII earthquake. This methodology and the calculated DPM-2007 curves may also be applied to seismic risk microzonation for many low-rise cities in Latin America, Asia, and Africa.
42 CFR 484.230 - Methodology used for the calculation of the low-utilization payment adjustment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Methodology used for the calculation of the low... Prospective Payment System for Home Health Agencies § 484.230 Methodology used for the calculation of the low... amount is determined by using cost data set forth in § 484.210(a) and adjusting by the appropriate wage...
Methodological choices affect cancer incidence rates: a cohort study.
Brooke, Hannah L; Talbäck, Mats; Feychting, Maria; Ljung, Rickard
2017-01-19
Incidence rates are fundamental to epidemiology, but their magnitude and interpretation depend on methodological choices. We aimed to examine the extent to which the definition of the study population affects cancer incidence rates. All primary cancer diagnoses in Sweden between 1958 and 2010 were identified from the national Cancer Register. Age-standardized and age-specific incidence rates of 29 cancer subtypes between 2000 and 2010 were calculated using four definitions of the study population: persons resident in Sweden 1) based on general population statistics; 2) with no previous subtype-specific cancer diagnosis; 3) with no previous cancer diagnosis except non-melanoma skin cancer; and 4) with no previous cancer diagnosis of any type. We calculated absolute and relative differences between methods. Age-standardized incidence rates calculated using general population statistics ranged from 6% lower (prostate cancer, incidence rate difference: -13.5/100,000 person-years) to 8% higher (breast cancer in women, incidence rate difference: 10.5/100,000 person-years) than incidence rates based on individuals with no previous subtype-specific cancer diagnosis. Age-standardized incidence rates in persons with no previous cancer of any type were up to 10% lower (bladder cancer in women) than rates in those with no previous subtype-specific cancer diagnosis; however, absolute differences were <5/100,000 person-years for all cancer subtypes. For some cancer subtypes incidence rates vary depending on the definition of the study population. For these subtypes, standardized incidence ratios calculated using general population statistics could be misleading. Moreover, etiological arguments should be used to inform methodological choices during study design.
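For orientation, the sketch below computes an age-standardized incidence rate by weighting age-specific rates (cases per person-year) with a standard population; the counts, person-years, and weights are invented and the choice of denominator population is exactly the methodological decision the study examines.

```python
# Sketch of an age-standardized incidence rate (invented counts and weights).
age_groups   = ["40-49", "50-59", "60-69", "70+"]
cases        = [30, 90, 180, 240]              # new diagnoses
person_years = [2.0e5, 1.8e5, 1.5e5, 1.0e5]    # person-years at risk
std_weights  = [0.35, 0.30, 0.20, 0.15]        # standard population weights (sum to 1)

asr = sum(w * c / py for w, c, py in zip(std_weights, cases, person_years))
print(f"age-standardized rate = {asr * 1e5:.1f} per 100,000 person-years")
```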
Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.
2016-11-29
Here, we present computed datasets on changes in the lattice parameter and elastic stiffness coefficients of BCC Fe due to substitutional Al, B, Cu, Mn, and Si solutes, and octahedral interstitial C and N solutes. The data is calculated using the methodology based on density functional theory (DFT). All the DFT calculations were performed using the Vienna Ab initio Simulations Package (VASP). The data is stored in the NIST dSpace repository.
NASA Astrophysics Data System (ADS)
Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.
2018-04-01
A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.
Fuzzy Set Methods for Object Recognition in Space Applications
NASA Technical Reports Server (NTRS)
Keller, James M. (Editor)
1992-01-01
Progress on the following four tasks is described: (1) fuzzy set based decision methodologies; (2) membership calculation; (3) clustering methods (including derivation of pose estimation parameters), and (4) acquisition of images and testing of algorithms.
Health economic assessment: a methodological primer.
Simoens, Steven
2009-12-01
This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments.
Health Economic Assessment: A Methodological Primer
Simoens, Steven
2009-01-01
This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments. PMID:20049237
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
NASA Astrophysics Data System (ADS)
Ben Mosbah, Abdallah
In order to improve the quality of wind tunnel tests and of the tools used to perform aerodynamic tests on aircraft wings in the wind tunnel, new methodologies were developed and tested on rigid and flexible wing models. A flexible wing concept consists of replacing a portion (lower and/or upper) of the skin with a flexible portion whose shape can be changed using an actuation system installed inside the wing. The main purpose of this concept is to improve the aerodynamic performance of the aircraft, and especially to reduce its fuel consumption. Numerical and experimental analyses were conducted to develop and test the methodologies proposed in this thesis. To control the flow inside the test section of the Price-Paidoussis wind tunnel at LARCASE, numerical and experimental analyses were performed. Computational fluid dynamics calculations were made in order to obtain a database used to develop a new hybrid methodology for wind tunnel calibration. This approach allows the flow in the test section of the Price-Paidoussis wind tunnel to be controlled. For the fast determination of aerodynamic parameters, new hybrid methodologies were proposed. These methodologies were used to control flight parameters by calculating the drag, lift and pitching moment coefficients and the pressure distribution around an airfoil. These aerodynamic coefficients were calculated from known airflow conditions such as the angle of attack and the Mach and Reynolds numbers. In order to modify the shape of the wing skin, electric actuators were installed inside the wing to obtain the desired shape. These deformations provide optimal profiles for different flight conditions in order to reduce fuel consumption. A controller based on neural networks was implemented to obtain the desired actuator displacements. A metaheuristic algorithm was used in hybridization with neural network and support vector machine approaches; their combination was optimized, and very good results were obtained in a reduced computing time. The obtained results were validated against numerical data obtained with the XFoil and Fluent codes. The results obtained using the methodologies presented in this thesis were also validated with experimental data obtained in the subsonic Price-Paidoussis blow-down wind tunnel.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
... Wage Rule revised the methodology by which we calculate the prevailing wages to be paid to H-2B workers... methodology by which we calculate the prevailing wages to be paid to H-2B workers and United States (U.S... concerning the calculation of the prevailing wage rate in the H-2B program. CATA v. Solis, Dkt. No. 103-1...
NASA Astrophysics Data System (ADS)
Lowe, Benjamin M.; Skylaris, Chris-Kriton; Green, Nicolas G.; Shibuta, Yasushi; Sakata, Toshiya
2018-04-01
Continuum-based methods are important in calculating electrostatic properties of interfacial systems, such as the electric field and surface potential, but are incapable of providing sufficient insight into a range of fundamentally and technologically important phenomena which occur at atomistic length-scales. In this work a molecular dynamics methodology is presented for interfacial electric field and potential calculations. The silica–water interface was chosen as an example system, which is highly relevant for understanding the response of field-effect transistor sensors (FET sensors). Detailed validation work is presented, followed by the simulated surface charge/surface potential relationship. This showed good agreement with experiment at low surface charge density, but at high surface charge density the results highlighted challenges presented by an atomistic definition of the surface potential. This methodology will be used to investigate the effects of surface morphology and biomolecule addition, both factors that are challenging to treat using conventional continuum models.
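A generic post-processing sketch, not the paper's code: one common way to obtain an interfacial potential profile from a simulation is to bin the charge density along the surface normal and integrate Poisson's equation twice. The toy charge-density profile below (a charge-neutral double layer) is invented.

```python
# Generic sketch: potential profile psi(z) from a binned charge density rho(z)
# by integrating Poisson's equation twice (toy, charge-neutral density profile).
import numpy as np

eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
z = np.linspace(0.0, 5e-9, 501)  # distance from the surface, m
dz = z[1] - z[0]

# toy charge density (C/m^3): a negative surface layer screened by a positive ion layer
rho = (-3.0e7 * np.exp(-((z - 0.3e-9) / 0.1e-9) ** 2)
       + 1.5e7 * np.exp(-((z - 0.8e-9) / 0.2e-9) ** 2))

field = np.cumsum(rho) * dz / eps0    # E(z) = (1/eps0) * integral of rho
psi = -np.cumsum(field) * dz          # psi(z) = -integral of E
psi -= psi[-1]                        # reference the potential to the bulk value

print(f"surface potential relative to bulk ≈ {psi[0] * 1e3:.1f} mV")
```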
Hirschi, Jennifer S.; Takeya, Tetsuya; Hang, Chao; Singleton, Daniel A.
2009-01-01
We suggest here and evaluate a methodology for the measurement of specific interatomic distances from a combination of theoretical calculations and experimentally measured 13C kinetic isotope effects. This process takes advantage of a broad diversity of transition structures available for the epoxidation of 2-methyl-2-butene with oxaziridines. From the isotope effects calculated for these transition structures, a theory-independent relationship between the C-O bond distances of the newly forming bonds and the isotope effects is established. Within the precision of the measurement, this relationship in combination with the experimental isotope effects provides a highly accurate picture of the C-O bonds forming at the transition state. The diversity of transition structures also allows an evaluation of the Schramm process for defining transition state geometries based on calculations at non-stationary points, and the methodology is found to be reasonably accurate. PMID:19146405
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Delgado, J; Liao, J C
1992-01-01
The methodology previously developed for determining the Flux Control Coefficients [Delgado & Liao (1992) Biochem. J. 282, 919-927] is extended to the calculation of metabolite Concentration Control Coefficients. It is shown that the transient metabolite concentrations are related by a few algebraic equations, attributed to mass balance, stoichiometric constraints, quasi-equilibrium or quasi-steady states, and kinetic regulations. The coefficients in these relations can be estimated using linear regression, and can be used to calculate the Control Coefficients. The theoretical basis and two examples are discussed. Although the methodology is derived based on the linear approximation of enzyme kinetics, it yields reasonably good estimates of the Control Coefficients for systems with non-linear kinetics. PMID:1497632
Fracture mechanism maps in unirradiated and irradiated metals and alloys
NASA Astrophysics Data System (ADS)
Li, Meimei; Zinkle, S. J.
2007-04-01
This paper presents a methodology for computing a fracture mechanism map in two-dimensional space of tensile stress and temperature using physically-based constitutive equations. Four principal fracture mechanisms were considered: cleavage fracture, low temperature ductile fracture, transgranular creep fracture, and intergranular creep fracture. The methodology was applied to calculate fracture mechanism maps for several selected reactor materials, CuCrZr, 316 type stainless steel, F82H ferritic-martensitic steel, V4Cr4Ti and Mo. The calculated fracture maps are in good agreement with empirical maps obtained from experimental observations. The fracture mechanism maps of unirradiated metals and alloys were modified to include radiation hardening effects on cleavage fracture and high temperature helium embrittlement. Future refinement of fracture mechanism maps is discussed.
This paper provides the EPA Combined Heat and Power Partnership's recommended methodology for calculating fuel and carbon dioxide emissions savings from CHP compared to SHP, which serves as the basis for the EPA's CHP emissions calculator.
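The sketch below illustrates the general CHP-versus-SHP comparison described above: the fuel that separate heat and power would require is estimated from assumed grid and boiler efficiencies, and the difference from the CHP fuel input is converted to CO2 with an emission factor. All efficiencies, outputs, and factors are placeholders, not the EPA calculator's default values.

```python
# Illustrative CHP vs. separate heat and power (SHP) fuel and CO2 comparison
# (placeholder efficiencies, outputs, and emission factor).
power_out = 10_000.0      # MWh of electricity produced per year
heat_out  = 40_000.0      # MMBtu of useful thermal output per year
chp_fuel  = 90_000.0      # MMBtu of fuel burned by the CHP system per year

eta_grid   = 0.33         # assumed displaced grid generation efficiency (incl. losses)
eta_boiler = 0.80         # assumed displaced on-site boiler efficiency
mwh_to_mmbtu = 3.412

shp_fuel = power_out * mwh_to_mmbtu / eta_grid + heat_out / eta_boiler
fuel_savings = shp_fuel - chp_fuel

ef = 53.06 / 1000.0       # approximate tCO2 per MMBtu of natural gas
co2_savings = fuel_savings * ef

print(f"SHP fuel = {shp_fuel:,.0f} MMBtu, CHP fuel = {chp_fuel:,.0f} MMBtu")
print(f"fuel savings = {fuel_savings:,.0f} MMBtu  (~{co2_savings:,.0f} t CO2)")
```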
Using State Estimation Residuals to Detect Abnormal SCADA Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Chen, Yousu; Huang, Zhenyu
2010-04-30
Detection of abnormal supervisory control and data acquisition (SCADA) data is critically important for safe and secure operation of modern power systems. In this paper, a methodology of abnormal SCADA data detection based on state estimation residuals is presented. Preceded by a brief overview of outlier detection methods and bad SCADA data detection for state estimation, the framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection algorithm. The BACON algorithm is applied to the outlier detection task. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
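For context, a minimal Python sketch of residual-based outlier detection on hypothetical data; it uses a simple 3-sigma rule as a stand-in rather than the BACON algorithm applied in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state-estimation residuals for 118 measurements,
# with a few manipulated values injected.
residuals = rng.normal(0.0, 0.01, size=118)
residuals[[5, 42, 99]] += 0.08   # simulated abnormal SCADA data

# Simple 3-sigma outlier test on the residuals (stand-in for BACON).
mu, sigma = residuals.mean(), residuals.std(ddof=1)
outliers = np.flatnonzero(np.abs(residuals - mu) > 3.0 * sigma)
print("flagged measurement indices:", outliers)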
Functional-diversity indices can be driven by methodological choices and species richness.
Poos, Mark S; Walker, Steven C; Jackson, Donald A
2009-02-01
Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive to these choices, and whether patterns in sensitivity were related to the alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species to two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
BATSE gamma-ray burst line search. 2: Bayesian consistency methodology
NASA Technical Reports Server (NTRS)
Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.
1994-01-01
We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.
Cardiac Mean Electrical Axis in Thoroughbreds—Standardization by the Dubois Lead Positioning System
da Costa, Cássia Fré; Samesima, Nelson; Pastore, Carlos Alberto
2017-01-01
Background Different methodologies for electrocardiographic acquisition in horses have been used since the first ECG recordings in equines were reported early in the last century. This study aimed to determine the best ECG electrodes positioning method and the most reliable calculation of mean cardiac axis (MEA) in equines. Materials and Methods We evaluated the electrocardiographic profile of 53 clinically healthy Thoroughbreds, 38 males and 15 females, with ages ranging 2–7 years old, all reared at the São Paulo Jockey Club, in Brazil. Two ECG tracings were recorded from each animal, one using the Dubois lead positioning system, the second using the base-apex method. QRS complex amplitudes were analyzed to obtain MEA values in the frontal plane for each of the two electrode positioning methods mentioned above, using two calculation approaches, the first by Tilley tables and the second by trigonometric calculation. Results were compared between the two methods. Results There was significant difference in cardiac axis values: MEA obtained by the Tilley tables was +135.1° ± 90.9° vs. -81.1° ± 3.6° (p<0.0001), and by trigonometric calculation it was -15.0° ± 11.3° vs. -79.9° ± 7.4° (p<0.0001), base-apex and Dubois, respectively. Furthermore, Dubois method presented small range of variation without statistical or clinical difference by either calculation mode, while there was a wide variation in the base-apex method. Conclusion Dubois improved centralization of the Thoroughbreds' hearts, engendering what seems to be the real frontal plane. By either calculation mode, it was the most reliable methodology to obtain cardiac mean electrical axis in equines. PMID:28095442
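As a generic illustration of the trigonometric calculation mode (not the specific lead combination or data of this study), the frontal-plane axis can be obtained from the net QRS deflections of two orthogonal limb leads; the amplitudes below are hypothetical.

import math

# Hypothetical net QRS amplitudes (mV): R-wave height minus S-wave depth.
net_lead_I = 0.6      # lead I points to 0 degrees in the frontal plane
net_lead_aVF = -0.4   # lead aVF points to +90 degrees

# Leads I and aVF are orthogonal, so the mean electrical axis follows
# directly from the arctangent of their net deflections.
mea_deg = math.degrees(math.atan2(net_lead_aVF, net_lead_I))
print(f"mean electrical axis: {mea_deg:.1f} degrees")  # about -33.7 degrees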
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... calculates the CBOE Gold ETF Volatility Index (``GVZ''), which is based on the VIX methodology applied to options on the SPDR Gold Trust (``GLD''). The current filing would permit $0.50 strike price intervals for...
NASA Astrophysics Data System (ADS)
Guler Yigitoglu, Askin
In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data, which means that the true state of a specific plant is not reflected in a realistic manner in the modeling of aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: the development of a methodology for the incorporation of aging modeling of passive SSCs into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe welds and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
FY16 Status Report on Development of Integrated EPP and SMT Design Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Sham, T. -L.; Wang, Y.
2016-08-01
The goal of the Elastic-Perfectly Plastic (EPP) combined integrated creep-fatigue damage evaluation approach is to incorporate a Simplified Model Test (SMT) data-based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying evaluation of elevated temperature cyclic service. The EPP methodology is based on the idea that creep damage and strain accumulation can be bounded by a properly chosen “pseudo” yield strength used in an elastic-perfectly plastic analysis, thus avoiding the need for stress classification. The original SMT approach is based on the use of elastic analysis. The experimental data, cycles to failure, is correlated using the elastically calculated strain range in the test specimen, and the corresponding component strain is also calculated elastically. The advantage of this approach is that it is no longer necessary to use the damage interaction, or D-diagram, because the damage due to the combined effects of creep and fatigue is accounted for in the test data by means of a specimen that is designed to replicate or bound the stress and strain redistribution that occurs in actual components when loaded in the creep regime. The reference approach to combining the two methodologies and the corresponding uncertainties and validation plans are presented. Results from recent key feature tests are discussed to illustrate the applicability of the EPP methodology and the behavior of materials at elevated temperature when undergoing stress and strain redistribution due to plasticity and creep.
Solute effect on basal and prismatic slip systems of Mg.
Moitra, Amitava; Kim, Seong-Gon; Horstemeyer, M F
2014-11-05
In an effort to design novel magnesium (Mg) alloys with high ductility, we present first-principles data based on density functional theory (DFT). DFT was employed to calculate the generalized stacking fault energy curves, which can be used in the generalized Peierls-Nabarro (PN) model to study the energetics of basal and prismatic slip in Mg with and without solutes, and to calculate continuum-scale dislocation core widths, stacking fault widths and Peierls stresses. The generalized stacking fault energy curves for pure Mg agreed well with other DFT calculations. Solute effects on these curves were calculated for nine alloying elements, namely Al, Ca, Ce, Gd, Li, Si, Sn, Zn and Zr, which allowed the strength and ductility to be qualitatively estimated based on the basal dislocation properties. Based on our multiscale methodology, a suggestion has been made to improve Mg formability.
ERIC Educational Resources Information Center
Calder Stegemann, Kim; Grünke, Matthias
2014-01-01
Number sense is critical to the development of higher order mathematic abilities. However, some children have difficulty acquiring these fundamental skills and the knowledge base of effective interventions/remediation is relatively limited. Based on emerging neuro-scientific research which has identified the association between finger…
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2014 CFR
2014-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2013 CFR
2013-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2011 CFR
2011-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2010 CFR
2010-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 412.424 - Methodology for calculating the Federal per diem payment amount.
Code of Federal Regulations, 2012 CFR
2012-10-01
... facilities located in a rural area as defined in § 412.402. (iii) Teaching adjustment. CMS adjusts the Federal per diem base rate by a factor to account for indirect teaching costs. (A) An inpatient psychiatric facility's teaching adjustment is based on the ratio of the number of full-time equivalent...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.337 Methodology for calculating the...
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.337 Methodology for calculating the...
Ab-Initio Molecular Dynamics Simulations of Molten Ni-Based Superalloys (Preprint)
2011-10-01
... in liquid-metal density with composition and temperature across the solidification zone. Here, fundamental properties of molten Ni-based alloys, required for modeling these instabilities, are ... temperature is assessed in model Ni-Al-W and RENE-N4 alloys. Calculations are performed using a recently implemented constant pressure methodology (NPT) which ...
A CLIMATOLOGY OF WATER BUDGET VARIABLE FOR THE NORTHEASTERN UNITED STATES
A Climatology of Water Budget Variables for the Northeast United States (Leathers and Robinson 1995). Climatic division precipitation and temperature data are used to calculate water budget variables based on the Thornthwaite/Mather climatic water budget methodology. Two water b...
A methodology for reduced order modeling and calibration of the upper atmosphere
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; Linares, Richard
2017-10-01
Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
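A minimal Python sketch of the proper orthogonal decomposition step is given below for orientation; the snapshot matrix is synthetic and merely stands in for density fields produced by the full physics-based model.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical snapshot matrix: each column is one flattened density field
# at one epoch (low-rank structure plus noise, standing in for model output).
x = np.linspace(0.0, 1.0, 2000)[:, None]
t = np.linspace(0.0, 1.0, 200)[None, :]
snapshots = (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * t)
             + 0.5 * np.sin(6 * np.pi * x) * np.sin(4 * np.pi * t)
             + 0.01 * rng.normal(size=(2000, 200)))

# Centre the snapshots and extract POD modes via the singular value decomposition.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

r = 5                                   # number of retained modes
modes = U[:, :r]                        # spatial POD modes
coeffs = np.diag(s[:r]) @ Vt[:r, :]     # time-dependent modal coefficients

# Reduced-order reconstruction and its relative error.
reconstruction = mean_field + modes @ coeffs
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {rel_err:.4f}")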
Holmes, Sean T; Alkan, Fahri; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil
2016-07-05
(29) Si and (31) P magnetic-shielding tensors in covalent network solids have been evaluated using periodic and cluster-based calculations. The cluster-based computational methodology employs pseudoatoms to reduce the net charge (resulting from missing co-ordination on the terminal atoms) through valence modification of terminal atoms using bond-valence theory (VMTA/BV). The magnetic-shielding tensors computed with the VMTA/BV method are compared to magnetic-shielding tensors determined with the periodic GIPAW approach. The cluster-based all-electron calculations agree with experiment better than the GIPAW calculations, particularly for predicting absolute magnetic shielding and for predicting chemical shifts. The performance of the DFT functionals CA-PZ, PW91, PBE, rPBE, PBEsol, WC, and PBE0 is assessed for the prediction of (29) Si and (31) P magnetic-shielding constants. Calculations using the hybrid functional PBE0, in combination with the VMTA/BV approach, result in excellent agreement with experiment. © 2016 Wiley Periodicals, Inc.
Kholod, N; Evans, M; Gusev, E; Yu, S; Malyshev, V; Tretyakova, S; Barinov, A
2016-03-15
This paper presents a methodology for calculating exhaust emissions from on-road transport in cities with low-quality traffic data and outdated vehicle registries. The methodology consists of data collection approaches and emission calculation methods. For data collection, the paper suggests using video survey and parking lot survey methods developed for the International Vehicular Emissions model. Additional sources of information include data from the largest transportation companies, vehicle inspection stations, and official vehicle registries. The paper suggests using the European Computer Programme to Calculate Emissions from Road Transport (COPERT) 4 model to calculate emissions, especially in countries that implemented European emissions standards. If available, the local emission factors should be used instead of the default COPERT emission factors. The paper also suggests additional steps in the methodology to calculate emissions only from diesel vehicles. We applied this methodology to calculate black carbon emissions from diesel on-road vehicles in Murmansk, Russia. The results from Murmansk show that diesel vehicles emitted 11.7 tons of black carbon in 2014. The main factors determining the level of emissions are the structure of the vehicle fleet and the level of vehicle emission controls. Vehicles without controls emit about 55% of black carbon emissions. Copyright © 2015 Elsevier B.V. All rights reserved.
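The activity-based core of such an inventory can be illustrated with a simple sum over fleet segments; the segments and black-carbon emission factors below are placeholders, not COPERT outputs or the Murmansk data.

# Hypothetical fleet segments: (name, number of vehicles, annual km per
# vehicle, black-carbon emission factor in g/km).
fleet = [
    ("diesel cars, Euro 4", 12_000, 15_000, 0.015),
    ("diesel trucks, Euro 3", 3_000, 40_000, 0.15),
    ("diesel buses, no control", 500, 60_000, 0.35),
]

total_bc_t = 0.0
for name, n_vehicles, km_per_year, ef_g_per_km in fleet:
    emissions_t = n_vehicles * km_per_year * ef_g_per_km / 1e6  # g -> t
    total_bc_t += emissions_t
    print(f"{name}: {emissions_t:.1f} t BC/year")
print(f"total: {total_bc_t:.1f} t BC/year")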
Using State Estimation Residuals to Detect Abnormal SCADA Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Chen, Yousu; Huang, Zhenyu
2010-06-14
Detection of manipulated supervisory control and data acquisition (SCADA) data is critically important for the safe and secure operation of modern power systems. In this paper, a methodology of detecting manipulated SCADA data based on state estimation residuals is presented. A framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection process. The BACON algorithm is applied to detect outliers in the state estimation residuals. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
A low power biomedical signal processor ASIC based on hardware software codesign.
Nie, Z D; Wang, L; Chen, W G; Zhang, T; Zhang, Y T
2009-01-01
A low power biomedical digital signal processor ASIC based on a hardware/software codesign methodology is presented in this paper. The codesign methodology was used to achieve higher system performance and design flexibility. The hardware implementation included a low power 32-bit RISC CPU (ARM7TDMI), a low power AHB-compatible bus, and a scalable digital co-processor optimized for low power Fast Fourier Transform (FFT) calculations. The co-processor could be scaled for 8-point, 16-point and 32-point FFTs, taking approximately 50, 100 and 150 clock cycles, respectively. The complete design was intensively simulated using an ARM DSM model and emulated on an ARM Versatile platform before being committed to silicon. The multi-million-gate ASIC was fabricated using SMIC 0.18 microm mixed-signal CMOS 1P6M technology. The die area measures 5,000 microm x 2,350 microm. The power consumption was approximately 3.6 mW at a 1.8 V power supply and 1 MHz clock rate. The power consumption for FFT calculations was less than 1.5% of that of the conventional embedded software-based solution.
10 CFR 300.9 - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... these guidelines, all reports must conform to the measurement methods established by the Technical... justification. (3) If a change in calculation methods (for inventories or reductions) is made for a particular year, the reporting entity must, if feasible, revise its base value to assure methodological...
10 CFR 300.9 - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... these guidelines, all reports must conform to the measurement methods established by the Technical... justification. (3) If a change in calculation methods (for inventories or reductions) is made for a particular year, the reporting entity must, if feasible, revise its base value to assure methodological...
10 CFR 300.9 - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... these guidelines, all reports must conform to the measurement methods established by the Technical... justification. (3) If a change in calculation methods (for inventories or reductions) is made for a particular year, the reporting entity must, if feasible, revise its base value to assure methodological...
10 CFR 300.9 - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... these guidelines, all reports must conform to the measurement methods established by the Technical... justification. (3) If a change in calculation methods (for inventories or reductions) is made for a particular year, the reporting entity must, if feasible, revise its base value to assure methodological...
10 CFR 300.9 - Reporting and recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... these guidelines, all reports must conform to the measurement methods established by the Technical... justification. (3) If a change in calculation methods (for inventories or reductions) is made for a particular year, the reporting entity must, if feasible, revise its base value to assure methodological...
Medical Problem-Solving: A Critique of the Literature.
ERIC Educational Resources Information Center
McGuire, Christine H.
1985-01-01
Prescriptive, decision-analysis of medical problem-solving has been based on decision theory that involves calculation and manipulation of complex probability and utility values to arrive at optimal decisions that will maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)
NASA Astrophysics Data System (ADS)
McJannet, D. L.; Cook, F. J.; McGloin, R. P.; McGowan, H. A.; Burn, S.
2011-05-01
The use of scintillometers to determine sensible and latent heat flux is becoming increasingly common because of their ability to quantify convective fluxes over distances of hundreds of meters to several kilometers. The majority of investigations using scintillometry have focused on processes above land surfaces, but here we propose a new methodology for obtaining sensible and latent heat fluxes from a scintillometer deployed over open water. This methodology has been tested by comparison with eddy covariance measurements and through comparison with alternative scintillometer calculation approaches that are commonly used in the literature. The methodology is based on linearization of the Bowen ratio, which is a common assumption in models such as Penman's model and its derivatives. Comparison of latent heat flux estimates from the eddy covariance system and the scintillometer showed excellent agreement across a range of weather conditions and flux rates, giving a high level of confidence in scintillometry-derived latent heat fluxes. The proposed approach produced better estimates than other scintillometry calculation methods because of the reliance of alternative methods on measurements of water temperature or water body heat storage, which are both notoriously hard to quantify. The proposed methodology requires less instrumentation than alternative scintillometer calculation approaches, and the spatial scales of required measurements are arguably more compatible. In addition to scintillometer measurements of the structure parameter of the refractive index of air, the only measurements required are atmospheric pressure, air temperature, humidity, and wind speed at one height over the water body.
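For orientation only, the Python sketch below shows how a linearised, equilibrium-type Bowen ratio can be used to partition a combined turbulent flux into sensible and latent components; it does not reproduce the paper's specific linearisation or the scintillometer processing chain, and all input values are hypothetical.

import math

def sat_vapour_pressure_kpa(temp_c):
    # Tetens-type approximation to the saturation vapour pressure (kPa).
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

air_temp_c = 22.0        # hypothetical air temperature over the water body
pressure_kpa = 101.3     # hypothetical atmospheric pressure

gamma = 0.000665 * pressure_kpa                       # psychrometric constant (kPa/C)
delta = (4098.0 * sat_vapour_pressure_kpa(air_temp_c)
         / (air_temp_c + 237.3) ** 2)                 # slope of the saturation curve (kPa/C)

bowen = gamma / delta    # equilibrium-type Bowen ratio from the linearised curve

# Hypothetical sum of the turbulent fluxes (W/m2), e.g. derived from the
# scintillometer structure parameter; the Bowen ratio splits it into H and LE.
total_flux = 250.0
latent_heat = total_flux / (1.0 + bowen)
sensible_heat = total_flux - latent_heat
print(f"beta = {bowen:.2f}, H = {sensible_heat:.0f} W/m2, LE = {latent_heat:.0f} W/m2")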
RadShield: semiautomated shielding design using a floor plan driven graphical user interface
Wu, Dee H.; Yang, Kai; Rutel, Isaac B.
2016-01-01
The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations are validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations were derived giving the maximum air-kerma rate magnitude and location through a first derivative root finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate were within 0.00124% and 14 mm. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, differences in locating the most sensitive calculation point on the floor plan for one of the barriers were not considered in the medical physicist's calculation and were revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology. Visual inspection alone of the 2D X-ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air-kerma rate or barrier thickness. PACS number(s): 87.55.N, 87.52.-g, 87.59.Bh, 87.57.-s PMID:27685128
RadShield: semiautomated shielding design using a floor plan driven graphical user interface.
DeLorenzo, Matthew C; Wu, Dee H; Yang, Kai; Rutel, Isaac B
2016-09-08
The purpose of this study was to introduce and describe the development of RadShield, a Java-based graphical user interface (GUI), which provides a base design that uniquely performs thorough, spatially distributed calculations at many points and reports the maximum air-kerma rate and barrier thickness for each barrier pursuant to NCRP Report 147 methodology. Semiautomated shielding design calculations are validated by two approaches: a geometry-based approach and a manual approach. A series of geometry-based equations were derived giving the maximum air-kerma rate magnitude and location through a first derivative root finding approach. The second approach consisted of comparing RadShield results with those found by manual shielding design by an American Board of Radiology (ABR)-certified medical physicist for two clinical room situations: two adjacent catheterization labs, and a radiographic and fluoroscopic (R&F) exam room. RadShield's efficacy in finding the maximum air-kerma rate was compared against the geometry-based approach and the overall shielding recommendations by RadShield were compared against the medical physicist's shielding results. Percentage errors between the geometry-based approach and RadShield's approach in finding the magnitude and location of the maximum air-kerma rate were within 0.00124% and 14 mm. RadShield's barrier thickness calculations were found to be within 0.156 mm lead (Pb) and 0.150 mm lead (Pb) for the adjacent catheterization labs and R&F room examples, respectively. However, within the R&F room example, differences in locating the most sensitive calculation point on the floor plan for one of the barriers were not considered in the medical physicist's calculation and were revealed by the RadShield calculations. RadShield is shown to accurately find the maximum values of air-kerma rate and barrier thickness using NCRP Report 147 methodology. Visual inspection alone of the 2D X-ray exam distribution by a medical physicist may not be sufficient to accurately select the point of maximum air-kerma rate or barrier thickness. © 2016 The Authors.
NASA Astrophysics Data System (ADS)
Guo, Yang; Becker, Ute; Neese, Frank
2018-03-01
Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.
Probabilistic assessment methodology for continuous-type petroleum accumulations
Crovelli, R.A.
2003-01-01
The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Code of Federal Regulations, 2014 CFR
2014-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Code of Federal Regulations, 2012 CFR
2012-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Code of Federal Regulations, 2011 CFR
2011-01-01
... persons outside the United States. (b) These rates are based on aviation safety inspector time rather than calculating a separate rate for managerial or clerical time because the inspector is the individual performing the actual service. Charging for inspector time, while building in all costs into the rate base...
Simoens, Steven
2013-01-01
Objectives This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474
Simoens, Steven
2013-01-01
This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation.
NASA Astrophysics Data System (ADS)
Kupchikova, N. V.; Kurbatskiy, E. N.
2017-11-01
This paper presents a methodology for the analytical solution of pile foundations with surface broadening and inclined side faces in the soil mass, based on the properties of the Fourier transform of finite functions. A comparative analysis of the calculation results obtained with the suggested method is presented for prismatic piles, prismatic piles with surface broadening, and piles with precast wedges at the surface.
Seidu, Issaka; Zhekova, Hristina R; Seth, Michael; Ziegler, Tom
2012-03-08
The performance of the second-order spin-flip constricted variational density functional theory (SF-CV(2)-DFT) for the calculation of the exchange coupling constant (J) is assessed by application to a series of triply bridged Cu(II) dinuclear complexes. A comparison of the J values based on SF-CV(2)-DFT with those obtained by the broken symmetry (BS) DFT method and experiment is provided. It is demonstrated that our methodology constitutes a viable alternative to the BS-DFT method. The strong dependence of the calculated exchange coupling constants on the applied functionals is demonstrated. Both SF-CV(2)-DFT and BS-DFT afford the best agreement with experiment for hybrid functionals.
A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks.
Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua
2015-12-17
Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed in the network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly to SDN-MNs, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and an Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. In order to quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels based on NME. Thirdly, to calculate the NME of an attack graph that takes the dynamic factors of SDN-MN into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) as the methodology. Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism.
A Security Assessment Mechanism for Software-Defined Networking-Based Mobile Networks
Luo, Shibo; Dong, Mianxiong; Ota, Kaoru; Wu, Jun; Li, Jianhua
2015-01-01
Software-Defined Networking-based Mobile Networks (SDN-MNs) are considered the future of 5G mobile network architecture. With the evolving cyber-attack threat, security assessments need to be performed in the network management. Due to the distinctive features of SDN-MNs, such as their dynamic nature and complexity, traditional network security assessment methodologies cannot be applied directly to SDN-MNs, and a novel security assessment methodology is needed. In this paper, an effective security assessment mechanism based on attack graphs and an Analytic Hierarchy Process (AHP) is proposed for SDN-MNs. Firstly, this paper discusses the security assessment problem of SDN-MNs and proposes a methodology using attack graphs and AHP. Secondly, to address the diversity and complexity of SDN-MNs, a novel attack graph definition and attack graph generation algorithm are proposed. In order to quantify security levels, the Node Minimal Effort (NME) is defined to quantify attack cost and derive system security levels based on NME. Thirdly, to calculate the NME of an attack graph that takes the dynamic factors of SDN-MN into consideration, we use AHP integrated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) as the methodology. Finally, we offer a case study to validate the proposed methodology. The case study and evaluation show the advantages of the proposed security assessment mechanism. PMID:26694409
Methodology to model the energy and greenhouse gas emissions of electronic software distributions.
Williams, Daniel R; Tang, Yinshan
2012-01-17
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To avoid reliance on high-level, top-down modeling and to increase result accuracy, the model focuses on device details and data routes. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides ESD and physical distribution options. The ESD method included the calculation of power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO(2)e of server and networking devices was proportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO(2)e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
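A highly simplified sketch of this style of accounting is shown below; the per-gigabyte energy intensities, browsing energy and grid emission factor are invented placeholders, not values from the paper's model.

# Hypothetical per-download parameters for an electronic software distribution.
download_size_gb = 2.0
server_kwh_per_gb = 0.05        # data-centre servers and storage (assumed)
network_kwh_per_gb = 0.10       # internet transfer across network hops (assumed)
browsing_kwh = 0.02             # user device energy while browsing/downloading
grid_kg_co2e_per_kwh = 0.45     # assumed grid emission factor

energy_kwh = (download_size_gb * (server_kwh_per_gb + network_kwh_per_gb)
              + browsing_kwh)
ghg_kg = energy_kwh * grid_kg_co2e_per_kwh
print(f"per download: {energy_kwh:.2f} kWh, {ghg_kg:.3f} kg CO2e")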
Assessing value-based health care delivery for haemodialysis.
Parra, Eduardo; Arenas, María Dolores; Alonso, Manuel; Martínez, María Fernanda; Gamen, Ángel; Aguarón, Juan; Escobar, María Teresa; Moreno-Jiménez, José María; Alvarez-Ude, Fernando
2017-06-01
Disparities in haemodialysis outcomes among centres have been well-documented. Moreover, attempts to assess haemodialysis results have been based on non-comprehensive methodologies. This study aimed to develop a comprehensive methodology for assessing haemodialysis centres, based on the value of health care. The value of health care is defined as the patient benefit from a specific medical intervention per monetary unit invested (Value = Patient Benefit/Cost). This study assessed the value of health care and ranked different haemodialysis centres. A nephrology quality management group identified the criteria for the assessment. An expert group composed of stakeholders (patients, clinicians and managers) agreed on the weighting of each variable, considering values and preferences. Multi-criteria methodology was used to analyse the data. Four criteria and their weights were identified: evidence-based clinical performance measures = 43 points; yearly mortality = 27 points; patient satisfaction = 13 points; and health-related quality of life = 17 points (100-point scale). Evidence-based clinical performance measures included five sub-criteria, each with its own weight: dialysis adequacy; haemoglobin concentration; mineral and bone disorders; type of vascular access; and hospitalization rate. The patient benefit was determined from co-morbidity-adjusted results and corresponding weights. The cost of each centre was calculated as the average amount expended per patient per year. The study was conducted in five centres (1-5). After adjusting for co-morbidity, value of health care was calculated, and the centres were ranked. A multi-way sensitivity analysis that considered different weights (10-60% changes) and costs (changes of 10% in direct and 30% in allocated costs) showed that the methodology was robust. The rankings 4-5-3-2-1 and 4-3-5-2-1 were observed in 62.21% and 21.55% of simulations, respectively, when weights were varied by 60%. Value assessments may integrate divergent stakeholder perceptions, create a context for improvement and aid in policy-making decisions. © 2015 John Wiley & Sons, Ltd.
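The value calculation described above can be illustrated with a toy example in Python; the centre scores and costs are invented, and only the published criterion weights are taken from the abstract.

# Criterion weights from the stakeholder group (100-point scale).
weights = {"clinical_performance": 43, "mortality": 27,
           "satisfaction": 13, "quality_of_life": 17}

# Hypothetical comorbidity-adjusted scores (0-1) for two centres, and an
# assumed average cost per patient per year (euros).
centres = {
    "centre_A": {"scores": {"clinical_performance": 0.85, "mortality": 0.70,
                            "satisfaction": 0.80, "quality_of_life": 0.75},
                 "cost": 42_000},
    "centre_B": {"scores": {"clinical_performance": 0.78, "mortality": 0.82,
                            "satisfaction": 0.70, "quality_of_life": 0.80},
                 "cost": 39_000},
}

for name, data in centres.items():
    benefit = sum(weights[c] * s for c, s in data["scores"].items())
    value = benefit / data["cost"]          # patient benefit per euro invested
    print(f"{name}: benefit={benefit:.1f}, value={1000 * value:.3f} per 1000 EUR")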
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-10-01
In this paper a new methodology for the diagnosing of skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis by using filters such as the classic, inverse and k-law nonlinear. The sample images were obtained by a medical specialist and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-01-01
In this paper a new methodology for the diagnosing of skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis by using filters such as the classic, inverse and k-law nonlinear. The sample images were obtained by a medical specialist and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%. PMID:26504638
Elastic interactions between two-dimensional geometric defects
NASA Astrophysics Data System (ADS)
Moshe, Michael; Sharon, Eran; Kupferman, Raz
2015-12-01
In this paper, we introduce a methodology applicable to a wide range of localized two-dimensional sources of stress. This methodology is based on a geometric formulation of elasticity. Localized sources of stress are viewed as singular defects—point charges of the curvature associated with a reference metric. The stress field in the presence of defects can be solved using a scalar stress function that generalizes the classical Airy stress function to the case of materials with nontrivial geometry. This approach allows the calculation of interaction energies between various types of defects. We apply our methodology to two physical systems: shear-induced failure of amorphous materials and the mechanical interaction between contracting cells.
Möhler, Christian; Wohlfahrt, Patrick; Richter, Christian; Greilich, Steffen
2017-06-01
Electron density is the most important tissue property influencing photon and ion dose distributions in radiotherapy patients. Dual-energy computed tomography (DECT) enables the determination of electron density by combining the information on photon attenuation obtained at two different effective x-ray energy spectra. Most algorithms suggested so far use the CT numbers provided after image reconstruction as input parameters, i.e., are image-based. To explore the accuracy that can be achieved with these approaches, we quantify the intrinsic methodological and calibration uncertainty of the seemingly simplest approach. In the studied approach, electron density is calculated with a one-parametric linear superposition ('alpha blending') of the two DECT images, which is shown to be equivalent to an affine relation between the photon attenuation cross sections of the two x-ray energy spectra. We propose to use the latter relation for empirical calibration of the spectrum-dependent blending parameter. For a conclusive assessment of the electron density uncertainty, we chose to isolate the purely methodological uncertainty component from CT-related effects such as noise and beam hardening. Analyzing calculated spectrally weighted attenuation coefficients, we find universal applicability of the investigated approach to arbitrary mixtures of human tissue with an upper limit of the methodological uncertainty component of 0.2%, excluding high-Z elements such as iodine. The proposed calibration procedure is bias-free and straightforward to perform using standard equipment. Testing the calibration on five published data sets, we obtain very small differences in the calibration result in spite of different experimental setups and CT protocols used. Employing a general calibration per scanner type and voltage combination is thus conceivable. Given the high suitability for clinical application of the alpha-blending approach in combination with a very small methodological uncertainty, we conclude that further refinement of image-based DECT-algorithms for electron density assessment is not advisable. © 2017 American Association of Physicists in Medicine.
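A minimal numerical sketch of one-parametric alpha blending and its empirical calibration is given below; all CT numbers and electron densities are invented for illustration, and the least-squares fit is a generic stand-in for the calibration procedure described in the paper.

import numpy as np

# Hypothetical calibration phantom: CT numbers (HU) of tissue-equivalent
# inserts at the low- and high-energy spectra, and their assumed relative
# electron densities (all values invented for illustration).
hu_low = np.array([-80.0, 40.0, 60.0, 230.0, 900.0])
hu_high = np.array([-90.0, 35.0, 50.0, 180.0, 700.0])
rho_e_true = np.array([0.90, 1.03, 1.04, 1.15, 1.58])

# Reduced CT numbers u = HU/1000 + 1 (proportional to attenuation).
u_low, u_high = hu_low / 1000.0 + 1.0, hu_high / 1000.0 + 1.0

# One-parameter blend: rho_e = alpha * u_low + (1 - alpha) * u_high.
# Calibrate alpha by least squares on the phantom inserts.
numerator = np.sum((u_low - u_high) * (rho_e_true - u_high))
denominator = np.sum((u_low - u_high) ** 2)
alpha = numerator / denominator

rho_e_fit = alpha * u_low + (1.0 - alpha) * u_high
max_dev = np.max(np.abs(rho_e_fit - rho_e_true))
print(f"alpha = {alpha:.3f}, max deviation = {max_dev:.4f}")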
Gyrokinetic modelling of the quasilinear particle flux for plasmas with neutral-beam fuelling
NASA Astrophysics Data System (ADS)
Narita, E.; Honda, M.; Nakata, M.; Yoshida, M.; Takenaga, H.; Hayashi, N.
2018-02-01
A quasilinear particle flux is modelled based on gyrokinetic calculations. The particle flux is estimated by determining factors, namely, coefficients of off-diagonal terms and a particle diffusivity. In this paper, the methodology to estimate the factors is presented using a subset of JT-60U plasmas. First, the coefficients of off-diagonal terms are estimated by linear gyrokinetic calculations. Next, to obtain the particle diffusivity, a semi-empirical approach is taken. Most experimental analyses for particle transport have assumed that turbulent particle fluxes are zero in the core region. On the other hand, even in the stationary state, the plasmas in question have a finite turbulent particle flux due to neutral-beam fuelling. By combining estimates of the experimental turbulent particle flux and the coefficients of off-diagonal terms calculated earlier, the particle diffusivity is obtained. The particle diffusivity should reflect a saturation amplitude of instabilities. The particle diffusivity is investigated in terms of the effects of the linear instability and linear zonal flow response, and it is found that a formula including these effects roughly reproduces the particle diffusivity. The developed framework for prediction of the particle flux is flexible enough to add terms neglected in the current model. The methodology to estimate the quasilinear particle flux requires such a low computational cost that a database consisting of the resultant coefficients of off-diagonal terms and particle diffusivity can be constructed to train a neural network. The development of the methodology is the first step towards a neural-network-based particle transport model for fast prediction of the particle flux.
Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.
2014-01-01
Purpose: To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design: Observational cohort study. Methods: 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results: Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1; whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion: The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
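A minimal sketch of the tangent-to-the-ROC idea described in this abstract: for a continuous test value, the likelihood ratio is the local slope of the ROC curve. The arrays below are placeholders, and the exact estimator used by the authors is not reproduced here.

```python
import numpy as np

def likelihood_ratios_from_roc(thresholds, sensitivity, false_positive_rate):
    """LR(t) ~ d(sensitivity)/d(FPR) evaluated along the threshold axis.
    Inputs must be sampled on a monotonic grid of thresholds."""
    dsens = np.gradient(sensitivity, thresholds)
    dfpr = np.gradient(false_positive_rate, thresholds)
    return dsens / dfpr

def posttest_probability(pretest_probability, lr):
    """Fagan-nomogram arithmetic: convert to odds, multiply by LR, convert back."""
    pre_odds = pretest_probability / (1.0 - pretest_probability)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)
```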
Sustainability assessment in forest management based on individual preferences.
Martín-Fernández, Susana; Martinez-Falero, Eugenio
2018-01-15
This paper presents a methodology to elicit the preferences of any individual in the assessment of sustainable forest management at the stand level. The elicitation procedure was based on the comparison of the sustainability of pairs of forest locations. A sustainability map of the whole territory was obtained according to the individual's preferences. Three forest sustainability indicators were pre-calculated for each point in a study area in a Scots pine forest in the National Park of Sierra de Guadarrama in the Madrid Region in Spain to obtain the best management plan with the sustainability map. We followed a participatory process involving fifty people to assess the sustainability of the forest management and the methodology. The results highlighted the demand for conservative forest management, the usefulness of the methodology for managers, and the importance and necessity of incorporating stakeholders into forestry decision-making processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Development and Application of Health-Based Screening Levels for Use in Water-Quality Assessments
Toccalino, Patricia L.
2007-01-01
Health-Based Screening Levels (HBSLs) are non-enforceable water-quality benchmarks that were developed by the U.S. Geological Survey in collaboration with the U.S. Environmental Protection Agency (USEPA) and others. HBSLs supplement existing Federal drinking-water standards and guidelines, thereby providing a basis for a more comprehensive evaluation of contaminant-occurrence data in the context of human health. Since the original methodology used to calculate HBSLs for unregulated contaminants was published in 2003, revisions have been made to the HBSL methodology in order to reflect updates to relevant USEPA policies. These revisions allow for the use of the most recent, USEPA peer-reviewed, publicly available human-health toxicity information in the development of HBSLs. This report summarizes the revisions to the HBSL methodology for unregulated contaminants, and updates the guidance on the use of HBSLs for interpreting water-quality data in the context of human health.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-30
... year (FY) 2012. The Wage Rule revised the methodology by which we calculate the prevailing wages to be... 19, 2011, 76 FR 3452. The Wage Rule revised the methodology by which we calculate the prevailing... November 30, 2011. When the Wage Rule goes into effect, it will supersede and make null the prevailing wage...
Introducing Hurst exponent in pair trading
NASA Astrophysics Data System (ADS)
Ramos-Requena, J. P.; Trinidad-Segovia, J. E.; Sánchez-Granero, M. A.
2017-12-01
In this paper we introduce a new methodology for pair trading. The method is based on the calculation of the Hurst exponent of a pair. Our approach is inspired by the classical concepts of co-integration and mean reversion, but combines them in a single strategy. We show that the Hurst approach gives better results than the classical Distance Method and Correlation strategies in different scenarios. The results obtained show that the new methodology is consistent and suitable, reducing the trading drawdown relative to the classical strategies and thereby achieving better performance.
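A compact sketch of one common Hurst-exponent estimator (the scaling of the standard deviation of lagged differences); this illustrates the concept and is not necessarily the estimator used in the paper.

```python
import numpy as np

def hurst_exponent(series, max_lag=20):
    """Estimate H from std(X[t+lag] - X[t]) ~ lag**H."""
    x = np.asarray(series, dtype=float)
    lags = np.arange(2, max_lag)
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

# Pair-trading use: for a spread such as log(P_A) - b*log(P_B), an estimated H
# well below 0.5 suggests mean reversion and hence a candidate pair to trade.
```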
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
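As a hedged illustration of a datasheet-based panel model of the kind used to drive MPPT simulations, the sketch below uses an ideal single-diode model (series and shunt resistances neglected; the ideality factor is an assumption), parametrized only by the short-circuit current, open-circuit voltage, and number of series cells.

```python
import numpy as np

Q, K = 1.602e-19, 1.381e-23   # electron charge (C), Boltzmann constant (J/K)

def panel_current(v, i_sc, v_oc, n_cells, t_cell_k=298.15, ideality=1.3):
    """Ideal single-diode I-V curve built from datasheet values."""
    vt = ideality * n_cells * K * t_cell_k / Q   # thermal voltage of the cell string
    i_0 = i_sc / np.expm1(v_oc / vt)             # saturation current from Voc
    return i_sc - i_0 * np.expm1(np.asarray(v) / vt)

# Sweeping v from 0 to v_oc yields the P-V curve on which MPPT algorithms such as
# perturb-and-observe or incremental conductance can be compared.
```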
Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.
1997-01-01
A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seung Jun; Buechler, Cynthia Eileen
The current study aims to predict the steady state power of a generic solution vessel and to develop a corresponding heat transfer coefficient correlation for a Moly99 production facility by conducting a fully coupled multi-physics simulation. A prediction of steady state power for the current application is inherently interconnected between thermal hydraulic characteristics (i.e. Multiphase computational fluid dynamics solved by ANSYS-Fluent 17.2) and the corresponding neutronic behavior (i.e. particle transport solved by MCNP6.2) in the solution vessel. Thus, the development of a coupling methodology is vital to understand the system behavior at a variety of system design and postulated operating scenarios. In this study, we report on the k-effective (keff) calculation for the baseline solution vessel configuration with a selected solution concentration using MCNP K-code modeling. The associated correlations of thermal properties (e.g. density, viscosity, thermal conductivity, specific heat) at the selected solution concentration are developed based on existing experimental measurements in the open literature. The numerical coupling methodology between multiphase CFD and MCNP is successfully demonstrated, and the detailed coupling procedure is documented. In addition, improved coupling methods capturing realistic physics in the solution vessel thermal-neutronic dynamics are proposed and tested further (i.e. dynamic height adjustment, multi-cell approach). As a key outcome of the current study, a multi-physics coupling methodology between MCFD and MCNP is demonstrated and tested for four different operating conditions. Those different operating conditions are determined based on the neutron source strength at a fixed geometry condition. The steady state powers for the generic solution vessel at various operating conditions are reported, and a generalized correlation of the heat transfer coefficient for the current application is discussed. The assessment of multi-physics methodology and preliminary results from various coupled calculations (power prediction and heat transfer coefficient) can be further utilized for the system code validation and generic solution vessel design improvement.
New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.
Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María
2017-08-01
In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables in comparison with their own real values is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
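A minimal sketch of the polynomial-regression reconstruction idea (hypothetical profile data, not the authors' published equations): fit the preserved enamel outline and extrapolate it over the worn region.

```python
import numpy as np

# Preserved outline of the cusp (assumed coordinates, in mm)
x_preserved = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_preserved = np.array([0.0, 0.9, 1.6, 2.1, 2.4])

coeffs = np.polyfit(x_preserved, y_preserved, deg=2)   # example: 2nd-order fit
x_worn = np.linspace(2.0, 3.0, 11)                     # region lost to wear
y_reconstructed = np.polyval(coeffs, x_worn)           # estimated original outline
```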
An approach to quantitative sustainability assessment in the early stages of process design.
Tugnoli, Alessandro; Santarelli, Francesco; Cozzani, Valerio
2008-06-15
A procedure was developed for the quantitative assessment of key performance indicators suitable for the sustainability analysis of alternative processes, mainly addressing the early stages of process design. The methodology was based on the calculation of a set of normalized impact indices allowing a direct comparison of the additional burden of each process alternative on a selected reference area. Innovative reference criteria were developed to compare and aggregate the impact indicators on the basis of the site-specific impact burden and sustainability policy. An aggregation procedure also allows the calculation of overall sustainability performance indicators and of an "impact fingerprint" of each process alternative. The final aim of the method is to support the decision making process during process development, providing a straightforward assessment of the expected sustainability performances. The application of the methodology to case studies concerning alternative waste disposal processes allowed a preliminary screening of the expected critical sustainability impacts of each process. The methodology was shown to provide useful results to address sustainability issues in the early stages of process design.
Cook, Troy A.
2013-01-01
Estimated ultimate recoveries (EURs) are a key component in determining productivity of wells in continuous-type oil and gas reservoirs. EURs form the foundation of a well-performance-based assessment methodology initially developed by the U.S. Geological Survey (USGS; Schmoker, 1999). This methodology was formally reviewed by the American Association of Petroleum Geologists Committee on Resource Evaluation (Curtis and others, 2001). The EUR estimation methodology described in this paper was used in the 2013 USGS assessment of continuous oil resources in the Bakken and Three Forks Formations and incorporates uncertainties that would not normally be included in a basic decline-curve calculation. These uncertainties relate to (1) the mean time before failure of the entire well-production system (excluding economics), (2) the uncertainty of when (and if) a stable hyperbolic-decline profile is revealed in the production data, (3) the particular formation involved, (4) relations between initial production rates and a stable hyperbolic-decline profile, and (5) the final behavior of the decline extrapolation as production becomes more dependent on matrix storage.
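For reference, a basic Arps hyperbolic decline extrapolation of the kind that underlies EUR estimation; the additional uncertainty terms enumerated above (system failure, onset of stable decline, formation effects) are not modeled in this sketch.

```python
import numpy as np

def arps_rate(t_years, q_i, d_i, b):
    """Hyperbolic decline: q(t) = q_i / (1 + b*d_i*t)**(1/b), with 0 < b < 1."""
    return q_i / (1.0 + b * d_i * t_years) ** (1.0 / b)

def eur(q_i, d_i, b, well_life_years=30.0, steps=3600):
    """Cumulative volume over the well life by numerical integration of the rate."""
    t = np.linspace(0.0, well_life_years, steps)
    return np.trapz(arps_rate(t, q_i, d_i, b), t)
```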
Modified Methodology for Projecting Coastal Louisiana Land Changes over the Next 50 Years
Hartley, Steve B.
2009-01-01
The coastal Louisiana landscape is continually undergoing geomorphologic changes (in particular, land loss); however, after the 2005 hurricane season, the changes were intensified because of Hurricanes Katrina and Rita. The amount of land loss caused by the 2005 hurricane season was 42 percent (562 km2) of the total land loss (1,329 km2) that was projected for the next 50 years in the Louisiana Coastal Area (LCA), Louisiana Ecosystem Restoration Study. The purpose of this study is to provide information on potential changes to coastal Louisiana by using a revised LCA study methodology. In the revised methodology, we used classified Landsat TM satellite imagery from 1990, 2001, 2004, and 2006 to calculate the 'background' or ambient land-water change rates but divided the Louisiana coastal area differently on the basis of (1) geographic regions ('subprovinces') and (2) specific homogeneous habitat types. Defining polygons by subprovinces (1, Pontchartrain Basin; 2, Barataria Basin; 3, Vermilion/Terrebonne Basins; and 4, the Chenier Plain area) allows for a specific erosion rate to be applied to that area. Further subdividing the provinces by habitat type allows for specific erosion rates for a particular vegetation type to be applied. Our modified methodology resulted in 24 polygons rather than the 183 that were used in the LCA study; further, actively managed areas and the CWPPRA areas were not masked out and dealt with separately as in the LCA study. This revised methodology assumes that erosion rates for habitat types by subprovince are under the influence of similar environmental conditions (sediment depletion, subsidence, and saltwater intrusion). Background change rates for three time periods (1990-2001, 1990-2004, and 1990-2006) were calculated by taking the difference in water or land over each time period and dividing it by the time interval. This calculation gives an annual change rate for each polygon per time period. Change rates for each time period were then used to compute the projected change in each subprovince and habitat type over 50 years by using the same compound rate functions used in the LCA study. The resulting maps show projected land changes based on the revised methodology and inclusion of damage by Hurricanes Katrina and Rita. Comparison of projected land change values between the LCA study and this study shows that this revised methodology - that is, using a reduced polygon subset (reduced from 183 to 24) based on habitat type and subprovince - can be used as a quick projection of land loss.
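A simplified sketch of the ambient-rate projection described above, for one habitat/subprovince polygon; the conversion to a fractional annual rate and the compounding form are assumptions standing in for the LCA compound-rate functions.

```python
def annual_change_rate(land_area_t1_km2, land_area_t2_km2, years_between):
    """Fractional land change per year between two classified dates."""
    return (land_area_t2_km2 - land_area_t1_km2) / (land_area_t1_km2 * years_between)

def projected_land(land_area_now_km2, annual_rate, horizon_years=50):
    """Compound the annual rate over the projection horizon."""
    return land_area_now_km2 * (1.0 + annual_rate) ** horizon_years
```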
Optimization of permanent breast seed implant dosimetry incorporating tissue heterogeneity
NASA Astrophysics Data System (ADS)
Mashouf, Shahram
Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose around brachytherapy sources is based on the AAPM TG43 formalism, which generates the dose in a homogeneous water medium. Recently, AAPM task group no. 186 (TG186) emphasized the importance of accounting for heterogeneities. In this work, we introduce an analytical dose calculation algorithm in heterogeneous media using CT images. The advantages over other methods are computational efficiency and the ease of integration into clinical use. An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in water medium. The ICF is a function of tissue properties and independent of the source structure. The ICF is extracted from CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose calculated with the TG43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. The dose distributions obtained by applying the ICF to the TG43 protocol agreed very well with those of Monte Carlo simulations and experiments in all phantoms. In all cases, the mean relative error was reduced by at least a factor of two when the ICF was applied to the TG43 protocol. In conclusion, we have developed a new analytical dose calculation method, which enables personalized dose calculations in heterogeneous media using CT images. The methodology offers several advantages including the use of the standard TG43 formalism, fast calculation times and extraction of the ICF parameters directly from Hounsfield Units. The methodology was implemented into our clinical treatment planning system, where a cohort of 140 patients was processed to study the clinical benefits of a heterogeneity corrected dose.
Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo
2017-02-20
We proposed an optical simulation model for the quantum dot (QD) nanophosphor based on the mean free path concept to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to obtain the desired optical characteristics, such as the mean free path and absorption spectra, for QD nanophosphors to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinate, CCT, and CRI. The proposed simulation model and measurement methodology can be applied easily to the design of a wide range of optoelectronic devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.
a Metadata Based Approach for Analyzing Uav Datasets for Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Dhanda, A.; Remondino, F.; Santana Quintero, M.
2018-05-01
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
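A sketch of the overlap arithmetic such an algorithm can apply to each image pair, assuming nadir imagery and a simple pinhole footprint model (parameter names are illustrative, not the authors' API).

```python
def ground_footprint_m(altitude_m, sensor_dim_mm, focal_length_mm):
    """Ground coverage of one image dimension for a nadir photograph."""
    return altitude_m * sensor_dim_mm / focal_length_mm

def overlap_fraction(footprint_m, spacing_m):
    """End (or side) overlap between exposures separated by spacing_m."""
    return max(0.0, 1.0 - spacing_m / footprint_m)

# Images whose computed end/side overlap exceeds a user-specified target can be
# filtered out to shrink the photogrammetric block without losing coverage.
```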
Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix
Pigni, M. T.; Croft, S.; Gauld, I. C.
2016-04-25
Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties on (1) 17,18O(α,n) reaction cross sections and (2) uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix that is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by the fit of stopping power data in the energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information relative to the evaluated 17,18O(α,n) reaction cross sections, as well as uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed output (α,n) source responses provides the mean values and standard deviations of the calculated responses reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.
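Conceptually, the perturbation step can be pictured as below: draw correlated cross-section perturbations from the covariance matrix, rerun the source calculation for each draw, and summarize the spread of the responses. The function run_source_calculation is a stand-in for the SOURCES4C/ORIGEN workflow, not a real interface.

```python
import numpy as np

def propagate_uncertainty(mean_xs, covariance, run_source_calculation,
                          n_samples=500, seed=0):
    """Monte Carlo propagation of nuclear-data covariances to an (alpha,n) response."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean_xs, covariance, size=n_samples)
    responses = np.array([run_source_calculation(xs) for xs in samples])
    return responses.mean(), responses.std(ddof=1)
```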
A model for inventory of ammonia emissions from agriculture in the Netherlands
NASA Astrophysics Data System (ADS)
Velthof, G. L.; van Bruggen, C.; Groenestein, C. M.; de Haan, B. J.; Hoogeveen, M. W.; Huijsmans, J. F. M.
2012-01-01
Agriculture is the major source of ammonia (NH3). Methodologies are needed to quantify national NH3 emissions and to identify the most effective options to mitigate NH3 emissions. Generally, NH3 emissions from agriculture are quantified using a nitrogen (N) flow approach, in which the NH3 emission is calculated from the N flows and NH3 emission factors. Because of the direct dependency between NH3 volatilization and Total Ammoniacal N (TAN; ammonium-N + N compounds readily broken down to ammonium), an approach based on TAN is preferred to calculate NH3 emission instead of an approach based on total N. A TAN-based NH3-inventory model was developed, called NEMA (National Emission Model for Ammonia). The total N excretion and the fraction of TAN in the excreted N are calculated from the feed composition and N digestibility of the components. TAN-based emission factors were derived or updated for housing systems, manure storage outside housing, manure application techniques, N fertilizer types, and grazing. The NEMA results show that the total NH3 emission from agriculture in the Netherlands in 2009 was 88.8 Gg NH3-N, of which 50% from housing, 37% from manure application, 9% from mineral N fertilizer, 3% from outside manure storage, and 1% from grazing. Cattle farming was the dominant source of NH3 in the Netherlands (about 50% of the total NH3 emission). The NH3 emission expressed as percentage of the excreted N was 22% of the excreted N for poultry, 20% for pigs, 15% for cattle, and 12% for other livestock, which is mainly related to differences in emissions from housing systems. The calculated ammonia emission was most sensitive to changes in the fraction of TAN in the excreted manure and to the emission factor of manure application. From 2011, NEMA will be used as official methodology to calculate the national NH3 emission from agriculture in the Netherlands.
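The TAN bookkeeping behind an inventory model of this kind can be sketched as a chain of emission factors applied to the remaining TAN pool at each stage (the factors below are placeholders, not the Dutch inventory values).

```python
def nh3_emission_kg(tan_excreted_kg, ef_housing, ef_storage, ef_application):
    """NH3-N emitted along the manure chain, with TAN depleted at each stage."""
    e_housing = tan_excreted_kg * ef_housing
    tan_after_housing = tan_excreted_kg - e_housing
    e_storage = tan_after_housing * ef_storage
    tan_applied = tan_after_housing - e_storage
    e_application = tan_applied * ef_application
    return e_housing + e_storage + e_application
```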
Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings
NASA Astrophysics Data System (ADS)
Hunt, H. E. M.
1996-05-01
A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated, and it has application to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other, and the applied forces are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of application of random process theory, but fully three-dimensional for calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.
NASA Astrophysics Data System (ADS)
Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.
1997-02-01
The method of buckling evaluation, implemented in the Monte Carlo code MCS, is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis of this comparison shows that there is no agreement among the Monte Carlo calculations obtained in different ways: the MCS calculations with the given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. Also, the buckling values evaluated by full-core MCS calculations differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the Keff value.
Probability calculations for three-part mineral resource assessments
Ellefsen, Karl J.
2017-06-27
Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with computer software that is newly implemented. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely the assumptions inherent in the probability calculations. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
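Conceptually, a three-part calculation combines a probability mass function for the number of undiscovered deposits with a size distribution per deposit. The Monte Carlo sketch below illustrates that combination with placeholder distributions and is not the USGS implementation.

```python
import numpy as np

def simulate_total_resource(n_values, n_probs, median_tonnage, sigma_log,
                            n_trials=10_000, seed=0):
    """Percentiles of the total undiscovered resource in a tract."""
    rng = np.random.default_rng(seed)
    n_deposits = rng.choice(n_values, size=n_trials, p=n_probs)
    totals = np.array([
        rng.lognormal(np.log(median_tonnage), sigma_log, size=k).sum() if k else 0.0
        for k in n_deposits
    ])
    return np.percentile(totals, [10, 50, 90])
```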
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
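In the spirit of the procedure described above, a minimal sketch of an automatic threshold selector: scan candidate thresholds, fit a generalized Pareto distribution to the excesses, and keep the lowest threshold whose Anderson-Darling statistic is acceptable. The acceptance value and minimum sample size are illustrative assumptions, and the bootstrap step for uncertainty quantification is omitted.

```python
import numpy as np
from scipy.stats import genpareto

def anderson_darling(excesses, shape, scale):
    """A2 statistic of the fitted GPD against the empirical excesses."""
    x = np.sort(excesses)
    n = len(x)
    cdf = np.clip(genpareto.cdf(x, shape, scale=scale), 1e-12, 1.0 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1])))

def select_threshold(data, candidate_thresholds, ad_max=1.0, min_excesses=30):
    data = np.asarray(data, dtype=float)
    for u in sorted(candidate_thresholds):
        exc = data[data > u] - u
        if len(exc) < min_excesses:
            continue
        shape, _, scale = genpareto.fit(exc, floc=0.0)
        if anderson_darling(exc, shape, scale) < ad_max:
            return u
    return None
```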
Methodological studies on the VVER-440 control assembly calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hordosy, G.; Kereszturi, A.; Maraczy, C.
1995-12-31
The control assembly regions of VVER-440 reactors are represented by 2-group albedo matrices in the global calculations of the KARATE code system. Some methodological aspects of calculating albedo matrices with the COLA transport code are presented. Illustrations are given of how these matrices depend on the relevant parameters describing the boron-steel and steel regions of the control assemblies. The calculation of the response matrix for a node consisting of two parts filled with different materials is discussed.
76 FR 71431 - Civil Penalty Calculation Methodology
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-17
... DEPARTMENT OF TRANSPORTATION Federal Motor Carrier Safety Administration Civil Penalty Calculation... is currently evaluating its civil penalty methodology. Part of this evaluation includes a forthcoming... civil penalties. UFA takes into account the statutory penalty factors under 49 U.S.C. 521(b)(2)(D). The...
NASA Astrophysics Data System (ADS)
Smith, William R.; Jirsák, Jan; Nezbeda, Ivo; Qi, Weikai
2017-07-01
The calculation of caloric properties such as heat capacity, Joule-Thomson coefficients, and the speed of sound by classical force-field-based molecular simulation methodology has received scant attention in the literature, particularly for systems composed of complex molecules whose force fields (FFs) are characterized by a combination of intramolecular and intermolecular terms. The calculation of a thermodynamic property for a system whose molecules are described by such a FF involves the calculation of the residual property prior to its addition to the corresponding ideal-gas property, the latter of which is separately calculated, either using thermochemical compilations or nowadays accurate quantum mechanical calculations. Although the simulation of a volumetric residual property proceeds by simply replacing the intermolecular FF in the rigid molecule case by the total (intramolecular plus intermolecular) FF, this is not the case for a caloric property. We describe the correct methodology required to perform such calculations and illustrate it in this paper for the case of the internal energy and the enthalpy and their corresponding molar heat capacities. We provide numerical results for cP, one of the most important caloric properties. We also consider approximations to the correct calculation procedure previously used in the literature and illustrate their consequences for the examples of the relatively simple molecule 2-propanol, CH3CH(OH)CH3, and for the more complex molecule monoethanolamine, HO(CH2)2NH2, an important fluid used in carbon capture.
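For orientation, the decomposition referred to above is the standard one (textbook thermodynamics, not a formula specific to this paper): the simulated residual part is added to a separately obtained ideal-gas contribution,

```latex
c_P(T,P) \;=\; c_P^{\mathrm{ig}}(T) \;+\; c_P^{\mathrm{res}}(T,P),
\qquad
c_P^{\mathrm{res}}(T,P) \;=\; \left(\frac{\partial H^{\mathrm{res}}(T,P)}{\partial T}\right)_{P},
```

where the residual enthalpy is the quantity evaluated by simulation with the full intramolecular plus intermolecular force field.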
Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services
Rajabi, A; Dabiri, A
2012-01-01
Background: Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. Methods: To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of the services of activity centers by cost objects, the cost price of medical services was calculated. Results: The cost price from the ABC method differs significantly from that of the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. Conclusion: The cost price of remedial services calculated with the tariff method is not accurate when compared with the ABC method. ABC calculates the cost price through suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services. PMID:23113171
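A minimal sketch of the ABC allocation logic described in the Methods (illustrative structure only; real cost drivers and activity centers would come from the hospital's activity analysis).

```python
def allocate_overhead(admin_cost, driver_shares):
    """Distribute an administrative cost over activity centers via a cost driver.
    driver_shares: {activity_center: fraction of the driver}, fractions sum to 1."""
    return {center: admin_cost * share for center, share in driver_shares.items()}

def service_cost_price(direct_cost, center_costs, usage_fractions, service_volume):
    """Unit cost price = (direct cost + consumed share of each center) / volume."""
    allocated = sum(center_costs[c] * f for c, f in usage_fractions.items())
    return (direct_cost + allocated) / service_volume
```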
NASA Technical Reports Server (NTRS)
Walters, Robert; Summers, Geoffrey P.; Warner, Jeffrey H.; Messenger, Scott; Lorentzen, Justin R.; Morton, Thomas; Taylor, Stephen J.; Evans, Hugh; Heynderickx, Daniel; Lei, Fan
2007-01-01
This paper presents a method for using the SPENVIS on-line computational suite to implement the displacement damage dose (Dd) methodology for calculating end-of-life (EOL) solar cell performance for a specific space mission. This paper builds on our previous work that has validated the Dd methodology against both measured space data [1,2] and calculations performed using the equivalent fluence methodology developed by NASA JPL [3]. For several years, the space solar community has considered general implementation of the Dd method, but no computer program exists to enable this implementation. In a collaborative effort, NRL, NASA and OAI have produced the Solar Array Verification and Analysis Tool (SAVANT) under NASA funding, but this program has not progressed beyond the beta-stage [4]. The SPENVIS suite with the Multi Layered Shielding Simulation Software (MULASSIS) contains all of the necessary components to implement the Dd methodology in a format complementary to that of SAVANT [5]. NRL is currently working with ESA and BIRA to include the Dd method of solar cell EOL calculations as an integral part of SPENVIS. This paper describes how this can be accomplished.
NASA Astrophysics Data System (ADS)
Giap, Huan Bosco
Accurate calculation of absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty in obtaining an accurate patient-specific 3-D activity map in-vivo and calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with an ^{131}I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0 to 5.9% for volumes greater than 88 ml. The percentage differences of the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for liver, 13.7% for spleen, and 0.9% for the tumor. Good agreement (percent differences were less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurement. More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to tumor without exceeding the toxicity limits of normal tissues.
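The dose-point-kernel step lends itself to a very short sketch: the SPECT-derived 3-D activity map is convolved with a radionuclide-specific kernel by FFT. The kernel below is a placeholder; in practice it would be tabulated for 131I on the same voxel grid as the activity map.

```python
import numpy as np
from scipy.signal import fftconvolve

def absorbed_dose_rate(activity_bq, kernel_gy_per_decay):
    """3-D absorbed-dose-rate map from an activity map and a dose-point kernel.
    Both arrays must share the same voxel size; output units follow the kernel."""
    return fftconvolve(np.asarray(activity_bq), np.asarray(kernel_gy_per_decay),
                       mode="same")
```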
The Field Production of Water for Injection
1985-12-01
[Fragment of a water-requirements table] ... L/day; Bedridden Patient, 0.75 L/day; Average Diseased Patient, 0.50 L/day. (There is no feasible methodology to forecast the number of procedures per...) Bedridden Patient, 0.75; All Diseased Patients, 0.50. An estimate of the liters/day needed may be calculated based on a forecasted patient stream, including...
Code of Federal Regulations, 2010 CFR
2010-10-01
... the methodology and data used to calculate the updated Federal per diem base payment amount. (b)(1... maintain the appropriate outlier percentage. (e) Describe the ICD-9-CM coding changes and DRG... psychiatric facilities for which the fiscal intermediary obtains inaccurate or incomplete data with which to...
Estimating Household Travel Energy Consumption in Conjunction with a Travel Demand Forecasting Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garikapati, Venu M.; You, Daehyun; Zhang, Wenwen
This paper presents a methodology for the calculation of the consumption of household travel energy at the level of the traffic analysis zone (TAZ) in conjunction with information that is readily available from a standard four-step travel demand model system. This methodology embeds two algorithms. The first provides a means of allocating non-home-based trips to residential zones that are the source of such trips, whereas the second provides a mechanism for incorporating the effects of household vehicle fleet composition on fuel consumption. The methodology is applied to the greater Atlanta, Georgia, metropolitan region in the United States and is found to offer a robust mechanism for calculating the footprint of household travel energy at the level of the individual TAZ; this mechanism makes possible the study of variations in the energy footprint across space. The travel energy footprint is strongly correlated with the density of the built environment, although socioeconomic differences across TAZs also likely contribute to differences in travel energy footprints. The TAZ-level calculator of the footprint of household travel energy can be used to analyze alternative futures and relate differences in the energy footprint to differences in a number of contributing factors and thus enables the design of urban form, formulation of policy interventions, and implementation of awareness campaigns that may produce more-sustainable patterns of energy consumption.
Application of machine learning methodology for pet-based definition of lung cancer
Kerhet, A.; Small, C.; Quon, H.; Riauka, T.; Schrader, L.; Greiner, R.; Yee, D.; McEwan, A.; Roa, W.
2010-01-01
We applied a learning methodology framework to assist in the threshold-based segmentation of non-small-cell lung cancer (nsclc) tumours in positron-emission tomography–computed tomography (pet–ct) imaging for use in radiotherapy planning. Gated and standard free-breathing studies of two patients were independently analysed (four studies in total). Each study had a pet–ct and a treatment-planning ct image. The reference gross tumour volume (gtv) was identified by two experienced radiation oncologists who also determined reference standardized uptake value (suv) thresholds that most closely approximated the gtv contour on each slice. A set of uptake distribution-related attributes was calculated for each pet slice. A machine learning algorithm was trained on a subset of the pet slices to cope with slice-to-slice variation in the optimal suv threshold: that is, to predict the most appropriate suv threshold from the calculated attributes for each slice. The algorithm’s performance was evaluated using the remainder of the pet slices. A high degree of geometric similarity was achieved between the areas outlined by the predicted and the reference suv thresholds (Jaccard index exceeding 0.82). No significant difference was found between the gated and the free-breathing results in the same patient. In this preliminary work, we demonstrated the potential applicability of a machine learning methodology as an auxiliary tool for radiation treatment planning in nsclc. PMID:20179802
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-02-01
The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
Niaksu, Olegas; Zaptorius, Jonas
2014-01-01
This paper presents a methodology suitable for creating a performance-related remuneration system in the healthcare sector that would meet requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking, and a posteriori evaluation has been proposed and discussed. The Priority Distribution Method is applied for unbiased weighting of performance criteria. Data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed a method for healthcare-specific criteria selection consisting of 8 steps, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluating the outcomes of the motivational system was proposed. The described methodology for calculating performance-related payment needs practical approbation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators. The final step would be approbation of the methodology in a healthcare facility.
NASA Astrophysics Data System (ADS)
Didier, Delaunay; Baptiste, Pignon; Nicolas, Boyard; Vincent, Sobotka
2018-05-01
Heat transfer during the cooling of an injected thermoplastic part directly affects the solidification of the polymer and consequently the quality of the part in terms of mechanical properties, geometric tolerance and surface aspect. This paper proposes to mold designers a methodology based on analytical models to quickly provide the time to reach the ejection temperature depending on the temperature and position of the cooling channels. The obtained cooling time is the first step of the thermal design of the mold. The presented methodology is dedicated to the determination of the solidification time of a semi-crystalline polymer slab. It allows the calculation of the crystallization time of the part and is based on the analytical solution of the Stefan problem in a semi-infinite medium. The crystallization is considered as a phase change with an effective crystallization temperature, which is obtained from Fast Scanning Calorimetry (FSC) results. The crystallization time is then corrected to take the finite thickness of the part into account. To check the accuracy of this approach, the solidification time is calculated by solving the heat conduction equation coupled to the crystallization kinetics of the polymer. The impact of the nature of the contact between the polymer and the mold is evaluated. The thermal contact resistance (TCR) appears as a significant parameter that needs to be taken into account in the cooling time calculation. The results of the simplified model, with and without TCR, are compared with experiments carried out with an instrumented mold in the case of a polypropylene (PP). The methodology is then applied to a part made of PolyEtherEtherKetone (PEEK).
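As an indication of the kind of closed-form estimate involved, the sketch below uses the classical one-phase Stefan (Neumann) solution for a slab cooled from the mold wall. This is a textbook simplification under stated assumptions (constant properties, crystallization treated as a sharp front at an effective temperature, no thermal contact resistance), not the authors' corrected model.

```python
from math import erf, exp, sqrt, pi
from scipy.optimize import brentq

def stefan_lambda(stefan_number):
    """Root of lam*exp(lam**2)*erf(lam) = Ste/sqrt(pi) (one-phase Stefan problem)."""
    f = lambda lam: lam * exp(lam ** 2) * erf(lam) - stefan_number / sqrt(pi)
    return brentq(f, 1e-6, 5.0)

def solidification_time_s(half_thickness_m, alpha_m2_s, cp_j_kgk, delta_t_k, latent_j_kg):
    """Time for the solidification front to reach the slab mid-plane."""
    ste = cp_j_kgk * delta_t_k / latent_j_kg        # Stefan number
    lam = stefan_lambda(ste)
    return half_thickness_m ** 2 / (4.0 * lam ** 2 * alpha_m2_s)
```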
Dexter, Franklin; Abouleish, Amr E; Epstein, Richard H; Whitten, Charles W; Lubarsky, David A
2003-10-01
Potential benefits to reducing turnover times are both quantitative (e.g., complete more cases and reduce staffing costs) and qualitative (e.g., improve professional satisfaction). Analyses have shown the quantitative arguments to be unsound except for reducing staffing costs. We describe a methodology by which each surgical suite can use its own numbers to calculate its individual potential reduction in staffing costs from reducing its turnover times. Calculations estimate optimal allocated operating room (OR) time (based on maximizing OR efficiency) before and after reducing the maximum and average turnover times. At four academic tertiary hospitals, reductions in average turnover times of 3 to 9 min would result in 0.8% to 1.8% reductions in staffing cost. Reductions in average turnover times of 10 to 19 min would result in 2.5% to 4.0% reductions in staffing costs. These reductions in staffing cost are achieved predominantly by reducing allocated OR time, not by reducing the hours that staff work late. Heads of anesthesiology groups often serve on OR committees that are fixated on turnover times. Rather than having to argue based on scientific studies, this methodology provides the ability to show the specific quantitative effects (small decreases in staffing costs and allocated OR time) of reducing turnover time using a surgical suite's own data. Many anesthesiologists work at hospitals where surgeons and/or operating room (OR) committees focus repeatedly on turnover time reduction. We developed a methodology by which the reductions in staffing cost as a result of turnover time reduction can be calculated for each facility using its own data. Staffing cost reductions are generally very small and would be achieved predominantly by reducing allocated OR time to the surgeons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, H.K.; Novak, T.
2008-03-15
During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.
Fogolari, Federico; Moroni, Elisabetta; Wojciechowski, Marcin; Baginski, Maciej; Ragona, Laura; Molinari, Henriette
2005-04-01
The pH-driven opening and closure of the beta-lactoglobulin EF loop, acting as a lid and closing the internal cavity of the protein, has been studied by molecular dynamics (MD) simulations and free energy calculations based on molecular mechanics/Poisson-Boltzmann (PB) solvent-accessible surface area (MM/PBSA) methodology. The forms above and below the transition pH differ presumably only in the protonation state of residue Glu89. MM/PBSA calculations are able to reproduce qualitatively the thermodynamics of the transition. The analysis of MD simulations using a combination of MM/PBSA methodology and the colony energy approach is able to highlight the driving forces involved in the transition. The analysis suggests that global rearrangements take place before the equilibrium local conformation is reached. This conclusion may bear general relevance to conformational transitions in all lipocalins and proteins in general. (c) 2005 Wiley-Liss, Inc.
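For context, the per-snapshot bookkeeping in an MM/PBSA estimate has the generic form sketched below; the non-polar surface coefficients are commonly used literature values quoted here as assumptions, and the entropic term is omitted.

```python
import numpy as np

def mmpbsa_free_energy(e_mm, g_pb, sasa_a2, gamma=0.00542, beta=0.92):
    """Average over MD snapshots of E_MM + G_PB + gamma*SASA + beta (kcal/mol).
    gamma (kcal/mol/A^2) and beta (kcal/mol) parametrize the non-polar term."""
    return float(np.mean(np.asarray(e_mm) + np.asarray(g_pb)
                         + gamma * np.asarray(sasa_a2) + beta))

# The pH-dependent transition can then be characterized by the difference of such
# averages between trajectories run with Glu89 protonated and deprotonated.
```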
Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A
2013-11-01
To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making. Copyright © 2013. Published by Elsevier Inc.
Algorithm for evaluating the effectiveness of a high-rise development project based on current yield
NASA Astrophysics Data System (ADS)
Soboleva, Elena
2018-03-01
The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author touches on the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, the influence of the project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to managing the efficiency of a development project based on operational evaluation of its current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weitz, R.; Thomas, C.; Klemm, J.
1982-03-03
External radiation doses are reconstructed for crews of support and target ships of Joint Task Force One at Operation CROSSROADS, 1946. Volume I describes the reconstruction methodology, which consists of modeling the radiation environment, to include the radioactivity of lagoon water, target ships, and support ship contamination; retracing ship paths through this environment; and calculating the doses to shipboard personnel. The USS RECLAIMER, a support ship, is selected as a representative ship to demonstrate this methodology. Doses for all other ships are summarized. Volume II (Appendix A) details the results for target ship personnel. Volume III (Appendix B) details the results for support ship personnel. Calculated doses for more than 36,000 personnel aboard support ships while at Bikini range from zero to 1.7 rem. Of those, approximately 34,000 are less than 0.5 rem. From the models provided, doses due to target ship reboarding and doses accrued after departure from Bikini can be calculated, based on the individual circumstances of exposure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, Camilla Dunham; McNeil, Michael; Dunham_Whitehead, Camilla
2008-02-28
The U.S. Environmental Protection Agency (EPA) influences the market for plumbing fixtures and fittings by encouraging consumers to purchase products that carry the WaterSense label, which certifies those products as performing at low flow rates compared to unlabeled fixtures and fittings. As consumers decide to purchase water-efficient products, water consumption will decline nationwide. Decreased water consumption should prolong the operating life of water and wastewater treatment facilities. This report describes the method used to calculate national water savings attributable to EPA's WaterSense program. A Microsoft Excel spreadsheet model, the National Water Savings (NWS) analysis model, accompanies this methodology report. Version 1.0 of the NWS model evaluates indoor residential water consumption. Two additional documents, a Users' Guide to the spreadsheet model and an Impacts Report, accompany the NWS model and this methodology document. Altogether, these four documents represent Phase One of this project. The Users' Guide leads policy makers through the spreadsheet options available for projecting the water savings that result from various policy scenarios. The Impacts Report shows national water savings that will result from differing degrees of market saturation of high-efficiency water-using products. This detailed methodology report describes the NWS analysis model, which examines the effects of WaterSense by tracking the shipments of products that WaterSense has designated as water-efficient. The model estimates market penetration of products that carry the WaterSense label. Market penetration is calculated for both existing and new construction. The NWS model estimates savings based on an accounting analysis of water-using products and of building stock. Estimates of future national water savings will help policy makers further direct the focus of WaterSense and calculate stakeholder impacts from the program. Calculating the total gallons of water the WaterSense program saves nationwide involves integrating two components, or modules, of the NWS model. Module 1 calculates the baseline national water consumption of typical fixtures, fittings, and appliances prior to the program (as described in Section 2.0 of this report). Module 2 develops trends in efficiency for water-using products both in the business-as-usual case and as a result of the program (Section 3.0). The NWS model combines the two modules to calculate total gallons saved by the WaterSense program (Section 4.0). Figure 1 illustrates the modules and the process involved in modeling for the NWS model analysis. The output of the NWS model provides the base case for each end use, as well as a prediction of total residential indoor water consumption during the next two decades. Based on the calculations described in Section 4.0, we can project a timeline of water savings attributable to the WaterSense program. The savings increase each year as the program results in the installation of greater numbers of efficient products, which come to compose more and more of the product stock in households throughout the United States.
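In highly simplified form, the savings accounting in a model of this kind multiplies the installed stock of labelled products by the per-unit difference between baseline and labelled water use; everything below is a placeholder structure, not the NWS model's actual inputs.

```python
def annual_savings_gallons(installed_labelled_units, uses_per_unit_per_year,
                           baseline_gal_per_use, labelled_gal_per_use):
    """National savings for one product category in one year."""
    per_unit = uses_per_unit_per_year * (baseline_gal_per_use - labelled_gal_per_use)
    return installed_labelled_units * per_unit
```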
NASA Astrophysics Data System (ADS)
Zacharias, Marios; Giustino, Feliciano
2016-08-01
Recently, Zacharias et al. [Phys. Rev. Lett. 115, 177401 (2015), 10.1103/PhysRevLett.115.177401] developed an ab initio theory of temperature-dependent optical absorption spectra and band gaps in semiconductors and insulators. In that work, the zero-point renormalization and the temperature dependence were obtained by sampling the nuclear wave functions using a stochastic approach. In the present work, we show that the stochastic sampling of Zacharias et al. can be replaced by fully deterministic supercell calculations based on a single optimal configuration of the atomic positions. We demonstrate that a single calculation is able to capture the temperature-dependent band-gap renormalization including quantum nuclear effects in direct-gap and indirect-gap semiconductors, as well as phonon-assisted optical absorption in indirect-gap semiconductors. In order to demonstrate this methodology, we calculate from first principles the temperature-dependent optical absorption spectra and the renormalization of direct and indirect band gaps in silicon, diamond, and gallium arsenide, and we obtain good agreement with experiment and with previous calculations. In this work we also establish the formal connection between the Williams-Lax theory of optical transitions and the related theories of indirect absorption by Hall, Bardeen, and Blatt, and of temperature-dependent band structures by Allen and Heine. The present methodology enables systematic ab initio calculations of optical absorption spectra at finite temperature, including both direct and indirect transitions. This feature will be useful for high-throughput calculations of optical properties at finite temperature and for calculating temperature-dependent optical properties using high-level theories such as GW and Bethe-Salpeter approaches.
This NODA requests public comment on two alternative allocation methodologies for existing units, on the unit-level allocations calculated using those alternative methodologies, on the data supporting the calculations, and on any resulting implications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Calculation Methodology 1 of This Subpart.
Fuel | Default high heating value factor | Default CO2 emission factor (kg CO2/MMBtu)
Natural Gas | 1.028 MMBtu/Mscf | 53.02
Propane | 3.822 MMBtu/bbl | 61.46
Normal butane | 4.242 ...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Calculation Methodology 1 of This Subpart.
Fuel | Default high heating value factor | Default CO2 emission factor (kg CO2/MMBtu)
Natural Gas | 1.028 MMBtu/Mscf | 53.02
Propane | 3.822 MMBtu/bbl | 61.46
Normal butane | 4.242 ...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Calculation Methodology 1 of This Subpart.
Fuel | Default high heating value factor | Default CO2 emission factor (kg CO2/MMBtu)
Natural Gas | 1.028 MMBtu/Mscf | 53.02
Propane | 3.822 MMBtu/bbl | 61.46
Normal butane | 4.242 ...
Web-4D-QSAR: A web-based application to generate 4D-QSAR descriptors.
Ataide Martins, João Paulo; Rougeth de Oliveira, Marco Antônio; Oliveira de Queiroz, Mário Sérgio
2018-06-05
A web-based application is developed to generate 4D-QSAR descriptors using the LQTA-QSAR methodology, based on molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package. The LQTAGrid module calculates the intermolecular interaction energies at each grid point, considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. A friendly front end web interface, built using the Django framework and Python programming language, integrates all steps of the LQTA-QSAR methodology in a way that is transparent to the user, and in the backend, GROMACS and LQTAGrid are executed to generate 4D-QSAR descriptors to be used later in the process of QSAR model building. © 2018 Wiley Periodicals, Inc.
A fluctuating quantum model of the CO vibration in carboxyhemoglobin.
Falvo, Cyril; Meier, Christoph
2011-06-07
In this paper, we present a theoretical approach to construct a fluctuating quantum model of the CO vibration in heme-CO proteins and its interaction with external laser fields. The methodology consists of mixed quantum-classical calculations for a restricted number of snapshots, which are then used to construct a parametrized quantum model. As an example, we calculate the infrared absorption spectrum of carboxy-hemoglobin, based on a simplified protein model, and find the absorption linewidth to be in good agreement with the experimental results. © 2011 American Institute of Physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Yuhua
2012-11-02
Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO2 sorbent candidates were further considered for experimental validation. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology not only can be used to search for good candidates in existing databases of solid materials, but can also provide guidelines for synthesizing new materials. In this presentation, we first introduce our screening methodology and the results on a test set of solids with known thermodynamic properties to validate our methodology. Then, by applying our computational method to several different kinds of solid systems, we demonstrate that our methodology can predict useful information to help develop CO2 capture technologies.
Propellant Mass Fraction Calculation Methodology for Launch Vehicles
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
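The competing definitions the paper compares can be made concrete with a small sketch. The three variants and all mass values below are illustrative assumptions, not the paper's reference vehicle.

```python
# Propellant mass fraction under three commonly cited conventions (illustrative).

def pmf_basic(m_propellant_loaded, m_inert):
    # Loaded propellant over total loaded stage mass.
    return m_propellant_loaded / (m_propellant_loaded + m_inert)

def pmf_usable(m_propellant_loaded, m_residuals, m_inert):
    # Residuals and other unusable propellant treated as dead mass.
    usable = m_propellant_loaded - m_residuals
    return usable / (m_propellant_loaded + m_inert)

def pmf_excl_engines(m_propellant_loaded, m_inert, m_engines):
    # Installed engine mass excluded from the inert side of the ratio.
    return m_propellant_loaded / (m_propellant_loaded + m_inert - m_engines)

# Hypothetical stage: 100 t propellant, 12 t inert, 1 t residuals, 3 t engines.
print(round(pmf_basic(100.0, 12.0), 3),
      round(pmf_usable(100.0, 1.0, 12.0), 3),
      round(pmf_excl_engines(100.0, 12.0, 3.0), 3))
```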
Laboratory calibration of pyrgeometers with known spectral responsivities.
Gröbner, Julian; Los, Alexander
2007-10-20
A methodology is presented to calibrate pyrgeometers measuring atmospheric long-wave radiation, if their spectral dome transmission is known. The new calibration procedure is based on a black-body cavity to retrieve the sensitivity of the pyrgeometer, combined with calculated atmospheric long-wave spectra to determine a correction function, dependent on the integrated atmospheric water vapor, that converts Planck radiation spectra to atmospheric long-wave spectra. The methodology was validated with two custom CG4 pyrgeometers with known dome transmissions by a comparison to the World Infrared Standard Group of Pyrgeometers (WISG) at the World Radiation Center-Infrared Radiometry Section. The responses retrieved using the new laboratory calibration agree to within 1% with the responses determined by a comparison to the WISG, which is well within the uncertainties of both methodologies.
Health-Based Screening Levels and their Application to Water-Quality Data
Toccalino, Patricia L.; Zogorski, John S.; Norman, Julia E.
2005-01-01
To supplement existing Federal drinking-water standards and guidelines, thereby providing a basis for a more comprehensive evaluation of contaminant-occurrence data in a human-health context, USGS began a collaborative project in 1998 with USEPA, the New Jersey Department of Environmental Protection, and the Oregon Health & Science University to calculate non-enforceable health-based screening levels. Screening levels were calculated for contaminants that do not have Maximum Contaminant Level values using a consensus approach that entailed (1) standard USEPA Office of Water methodologies (equations) for establishing Lifetime Health Advisory (LHA) and Risk-Specific Dose (RSD) values for the protection of human health, and (2) existing USEPA human-health toxicity information.
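The LHA and RSD equations referred to above follow the general USEPA Office of Water forms sketched below. The conventional defaults shown (70 kg body weight, 2 L/day drinking-water intake, 20% relative source contribution) and the contaminant toxicity values are assumptions for illustration only, not values from the project.

```python
def lifetime_ha_ugL(rfd_mg_per_kg_day, bw_kg=70.0, dwi_L_per_day=2.0, rsc=0.2):
    """Noncarcinogen screening level (ug/L): (RfD * BW / DWI) * RSC."""
    return rfd_mg_per_kg_day * bw_kg / dwi_L_per_day * rsc * 1000.0  # mg/L -> ug/L

def risk_specific_conc_ugL(slope_factor_per_mg_kg_day, risk=1e-6,
                           bw_kg=70.0, dwi_L_per_day=2.0):
    """Carcinogen screening concentration (ug/L) at a given lifetime risk."""
    return risk * bw_kg / (slope_factor_per_mg_kg_day * dwi_L_per_day) * 1000.0

# Hypothetical contaminant: RfD = 0.004 mg/kg/day, slope factor = 0.1 (mg/kg/day)^-1.
print(round(lifetime_ha_ugL(0.004), 1), "ug/L noncancer screening level")
print(round(risk_specific_conc_ugL(0.1), 3), "ug/L at 1e-6 cancer risk")
```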
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
Hu, Wei; Lin, Lin; Yang, Chao
2015-12-21
With the help of our recently developed massively parallel DGDFT (Discontinuous Galerkin Density Functional Theory) methodology, we perform large-scale Kohn-Sham density functional theory calculations on phosphorene nanoribbons with armchair edges (ACPNRs) containing a few thousands to ten thousand atoms. The use of DGDFT allows us to systematically achieve a conventional plane wave basis set type of accuracy, but with a much smaller number (about 15) of adaptive local basis (ALB) functions per atom for this system. The relatively small number of degrees of freedom required to represent the Kohn-Sham Hamiltonian, together with the use of the pole expansion and selected inversion (PEXSI) technique that circumvents the need to diagonalize the Hamiltonian, results in a highly efficient and scalable computational scheme for analyzing the electronic structures of ACPNRs as well as their dynamics. The total wall clock time for calculating the electronic structures of large-scale ACPNRs containing 1080-10,800 atoms is only 10-25 s per self-consistent field (SCF) iteration, with accuracy fully comparable to that obtained from conventional planewave DFT calculations. For the ACPNR system, we observe that the DGDFT methodology can scale to 5000-50,000 processors. We use DGDFT based ab initio molecular dynamics (AIMD) calculations to study the thermodynamic stability of ACPNRs. Our calculations reveal that a 2 × 1 edge reconstruction appears in ACPNRs at room temperature.
2011-01-01
Background It was still unclear whether the methodological reporting quality of randomized controlled trials (RCTs) in major hepato-gastroenterology journals improved after the Consolidated Standards of Reporting Trials (CONSORT) Statement was revised in 2001. Methods RCTs in five major hepato-gastroenterology journals published in 1998 or 2008 were retrieved from MEDLINE using a high sensitivity search method and their reporting quality of methodological details were evaluated based on the CONSORT Statement and Cochrane Handbook for Systematic Reviews of Interventions. Changes of the methodological reporting quality between 2008 and 1998 were calculated by risk ratios with 95% confidence intervals. Results A total of 107 RCTs published in 2008 and 99 RCTs published in 1998 were found. Compared to those in 1998, the proportion of RCTs that reported sequence generation (RR, 5.70; 95%CI 3.11-10.42), allocation concealment (RR, 4.08; 95%CI 2.25-7.39), sample size calculation (RR, 3.83; 95%CI 2.10-6.98), incomplete outcome data addressed (RR, 1.81; 95%CI, 1.03-3.17), intention-to-treat analyses (RR, 3.04; 95%CI 1.72-5.39) increased in 2008. Blinding and intention-to-treat analysis were reported better in multi-center trials than in single-center trials. The reporting of allocation concealment and blinding were better in industry-sponsored trials than in public-funded trials. Compared with historical studies, the methodological reporting quality improved with time. Conclusion Although the reporting of several important methodological aspects improved in 2008 compared with those published in 1998, which may indicate the researchers had increased awareness of and compliance with the revised CONSORT statement, some items were still poorly reported. There is much room for future improvement. PMID:21801429
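The change measure used above, a risk ratio with a 95% confidence interval, can be computed directly from the 2008 and 1998 counts. The counts below are invented to illustrate the arithmetic; they are not taken from the study.

```python
import math

def risk_ratio_ci(events_2008, n_2008, events_1998, n_1998, z=1.96):
    """Risk ratio of reporting an item in 2008 vs 1998 with a Wald 95% CI."""
    p1, p0 = events_2008 / n_2008, events_1998 / n_1998
    rr = p1 / p0
    se_log_rr = math.sqrt(1/events_2008 - 1/n_2008 + 1/events_1998 - 1/n_1998)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical: 60/107 RCTs reported an item in 2008 versus 10/99 in 1998.
rr, lo, hi = risk_ratio_ci(60, 107, 10, 99)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```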
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-29
... Liquidity Factor of CME's CDS Margin Methodology April 23, 2013. Pursuant to Section 19(b)(1) of the... additions; bracketed text indicates deletions. * * * * * CME CDS Liquidity Margin Factor Calculation Methodology The Liquidity Factor will be calculated as the sum of two components: (1) A concentration charge...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... Liquidity Factor of CME's CDS Margin Methodology December 21, 2012. Pursuant to Section 19(b)(1) of the.... * * * * * CME CDS Liquidity Margin Factor Calculation Methodology The Liquidity Factor will be calculated as the... Liquidity Factor using the current Gross Notional Function with the following modifications: (1) the...
ERIC Educational Resources Information Center
Rickard, Andrew
2006-01-01
Event tourism is accompanied by social, economic and environmental benefits and costs. The assessment of this form of tourism has however largely focused on the social and economic perspectives, while environmental assessments have been bound to a destination-based approach. The application of the Ecological Footprint methodology allows for these…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-27
..., CBOE calculates the CBOE Gold ETF Volatility Index (``GVZ''), which is based on the VIX methodology applied to options on the SPDR Gold Trust (``GLD''). The current filing would permit $0.50 strike price... other exchange-traded fund (``ETF'') options. See Rule 903, Commentary .05 Volatility indexes are...
Reaction-to-fire testing and modeling for wood products
Mark A. Dietenberger; Robert H. White
2001-01-01
In this review we primarily discuss our use of the oxygen consumption calorimeter (ASTM E1354 for cone calorimeter and ISO9705 for room/corner tests) and fire growth modeling to evaluate treated wood products. With recent development towards performance-based building codes, new methodology requires engineering calculations of various fire growth scenarios. The initial...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-03
... part of the office-based and ancillary radiology payment methodology. This notice updates the CY 2010... covered ancillary radiology services to the lesser of the ASC rate or the amount calculated by multiplying... procedures and covered ancillary radiology services are determined using the amounts in the MPFS final rule...
Creep force modelling for rail traction vehicles based on the Fastsim algorithm
NASA Astrophysics Data System (ADS)
Spiryagin, Maksym; Polach, Oldrich; Cole, Colin
2013-11-01
The evaluation of creep forces is a complex task and their calculation is a time-consuming process for multibody simulation (MBS). A methodology of creep forces modelling at large traction creepages has been proposed by Polach [Creep forces in simulations of traction vehicles running on adhesion limit. Wear. 2005;258:992-1000; Influence of locomotive tractive effort on the forces between wheel and rail. Veh Syst Dyn. 2001(Suppl);35:7-22] adapting his previously published algorithm [Polach O. A fast wheel-rail forces calculation computer code. Veh Syst Dyn. 1999(Suppl);33:728-739]. The most common method for creep force modelling used by software packages for MBS of running dynamics is the Fastsim algorithm by Kalker [A fast algorithm for the simplified theory of rolling contact. Veh Syst Dyn. 1982;11:1-13]. However, the Fastsim code has some limitations which do not allow modelling the creep force - creep characteristic in agreement with measurements for locomotives and other high-power traction vehicles, mainly for large traction creep at low-adhesion conditions. This paper describes a newly developed methodology based on a variable contact flexibility increasing with the ratio of the slip area to the area of adhesion. This variable contact flexibility is introduced in a modification of Kalker's code Fastsim by replacing the constant Kalker's reduction factor, widely used in MBS, by a variable reduction factor together with a slip-velocity-dependent friction coefficient decreasing with increasing global creepage. The proposed methodology is presented in this work and compared with measurements for different locomotives. The modification allows use of the well recognised Fastsim code for simulation of creep forces at large creepages in agreement with measurements without modifying the proven modelling methodology at small creepages.
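A minimal sketch of the two ingredients named above: a friction coefficient that falls with slip velocity, of the general form mu(w) = mu0[(1 - A)exp(-B w) + A] used in Polach's model, and a reduction factor that is scaled down as the slip area grows instead of being held constant. The blending rule and all parameter values here are illustrative assumptions, not the published algorithm.

```python
import math

def friction_coefficient(slip_velocity, mu0=0.4, A=0.4, B=0.6):
    """Slip-velocity-dependent friction: mu(w) = mu0 * ((1 - A) * exp(-B * w) + A)."""
    return mu0 * ((1.0 - A) * math.exp(-B * slip_velocity) + A)

def variable_reduction_factor(k_small_creep, slip_area_ratio):
    """Hypothetical blend: shrink the constant reduction factor as the slip area
    grows relative to the adhesion area (ratio in [0, 1])."""
    return k_small_creep * (1.0 - 0.5 * slip_area_ratio)

# Illustrative sweep over longitudinal creepage at 20 m/s vehicle speed.
v = 20.0
for creep in (0.005, 0.05, 0.2):
    w = creep * v                       # slip velocity in m/s
    mu = friction_coefficient(w)
    k = variable_reduction_factor(k_small_creep=0.6,
                                  slip_area_ratio=min(1.0, creep / 0.2))
    print(f"creep {creep:5.3f}: mu = {mu:.3f}, reduction factor = {k:.2f}")
```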
Risk-based Methodology for Validation of Pharmaceutical Batch Processes.
Wiles, Frederick
2013-01-01
In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
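One way to make the capability criterion concrete is a Ppk-style index computed from PPQ sample data against the specification limits, with the required confidence and coverage chosen from the PFMECA risk ranking. The acceptance threshold and assay data below are illustrative assumptions, not the article's values.

```python
import statistics

def ppk(samples, lsl, usl):
    """Capability index from PPQ sample data against lower/upper spec limits."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)        # overall sample standard deviation
    return min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))

# Hypothetical assay results (% label claim) from one PPQ batch, spec 95.0-105.0.
assay = [99.8, 100.4, 99.1, 100.9, 100.2, 99.6, 100.7, 99.9, 100.3, 100.1]
index = ppk(assay, lsl=95.0, usl=105.0)
required = 1.33   # illustrative risk-based acceptance threshold
print(f"Ppk = {index:.2f}; milestone met: {index >= required}")
```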
District Heating Systems Performance Analyses. Heat Energy Tariff
NASA Astrophysics Data System (ADS)
Ziemele, Jelena; Vigants, Girts; Vitolins, Valdis; Blumberga, Dagnija; Veidenbergs, Ivars
2014-12-01
The paper addresses an important element of the European energy sector: the evaluation of district heating (DH) system operations from the standpoint of increasing energy efficiency and increasing the use of renewable energy resources. This has been done by developing a new methodology for the evaluation of the heat tariff. The paper presents an algorithm of this methodology, which includes not only a database and calculation equation systems, but also an integrated multi-criteria analysis module using MADM/MCDM (Multi-Attribute Decision Making / Multi-Criteria Decision Making) based on TOPSIS (Technique for Order Performance by Similarity to Ideal Solution). The results of the multi-criteria analysis are used to set the tariff benchmarks. The evaluation methodology has been tested for Latvian heat tariffs, and the obtained results show that only half of the heating companies reach a benchmark value of 0.5 for the closeness-to-the-ideal-solution efficiency indicator. This means that the proposed evaluation methodology would not only allow companies to determine how they perform with regard to the proposed benchmark, but also to identify their need to restructure so that they may reach the level of a low-carbon business.
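The TOPSIS closeness indicator against which the 0.5 benchmark is set follows the standard form sketched below. The criteria, weights, and company data are invented for illustration and do not reproduce the Latvian tariff analysis.

```python
import numpy as np

def topsis_closeness(matrix, weights, benefit):
    """Standard TOPSIS closeness-to-ideal-solution coefficients in [0, 1].

    matrix: alternatives x criteria; weights sum to 1;
    benefit[j] is True if higher values of criterion j are better.
    """
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))      # vector normalisation
    v = norm * weights                                       # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Hypothetical DH companies scored on efficiency, renewable share, tariff (EUR/MWh).
data = np.array([[0.82, 0.40, 58.0],
                 [0.74, 0.10, 64.0],
                 [0.90, 0.65, 52.0]])
weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([True, True, False])                      # lower tariff is better
print(np.round(topsis_closeness(data, weights, benefit), 2))
```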
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient techniques for performing probabilistic stability analysis while considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method: an explicit performance function is developed from the results of numerical simulations and then used in FORM. The implementation of the proposed methodology is demonstrated for a large potential rock wedge at Sumela Monastery, Turkey. The ability of the developed performance function to accurately represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with that obtained from the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of reduced accuracy, with an error of 24%.
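As a concrete illustration of the two probability estimates being compared, the sketch below evaluates an explicit performance function g(X) both by a crude first-order approximation (a mean-value FOSM simplification of FORM, Pf approximated as Phi(-beta)) and by Monte Carlo simulation. The wedge-stability function and parameter distributions are invented stand-ins, not the Sumela Monastery model.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def g(c, phi_deg, load):
    """Hypothetical limit state: resisting minus driving force (kN); g < 0 means failure."""
    return c * 12.0 + load * np.tan(np.radians(phi_deg)) * 0.8 - 0.7 * load

# Assumed random variables: cohesion (kPa), friction angle (deg), driving load (kN).
c = rng.normal(30.0, 6.0, 200_000)
phi = rng.normal(35.0, 3.0, 200_000)
load = rng.normal(500.0, 75.0, 200_000)

gv = g(c, phi, load)

# Crude first-order estimate (mean-value FOSM, a simplification of FORM):
beta = gv.mean() / gv.std()
pf_first_order = 0.5 * (1.0 - erf(beta / sqrt(2.0)))   # Phi(-beta)

# Monte Carlo estimate for comparison:
pf_mcs = (gv < 0.0).mean()
print(f"beta ~ {beta:.2f}, Pf (first-order) ~ {pf_first_order:.2e}, Pf (MCS) ~ {pf_mcs:.2e}")
```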
Heberling, Matthew T; Hopton, Matthew E
2012-11-30
This paper introduces a collection of four articles describing the San Luis Basin Sustainability Metrics Project. The Project developed a methodology for evaluating regional sustainability. This introduction provides the necessary background information for the project, description of the region, overview of the methods, and summary of the results. Although there are a multitude of scientifically based sustainability metrics, many are data intensive, difficult to calculate, and fail to capture all aspects of a system. We wanted to see if we could develop an approach that decision-makers could use to understand if their system was moving toward or away from sustainability. The goal was to produce a scientifically defensible, but straightforward and inexpensive methodology to measure and monitor environmental quality within a regional system. We initiated an interdisciplinary pilot project in the San Luis Basin, south-central Colorado, to test the methodology. The objectives were: 1) determine the applicability of using existing datasets to estimate metrics of sustainability at a regional scale; 2) calculate metrics through time from 1980 to 2005; and 3) compare and contrast the results to determine if the system was moving toward or away from sustainability. The sustainability metrics, chosen to represent major components of the system, were: 1) Ecological Footprint to capture the impact and human burden on the system; 2) Green Net Regional Product to represent economic welfare; 3) Emergy to capture the quality-normalized flow of energy through the system; and 4) Fisher information to capture the overall dynamic order and to look for possible regime changes. The methodology, data, and results of each metric are presented in the remaining four papers of the special collection. Based on the results of each metric and our criteria for understanding the sustainability trends, we find that the San Luis Basin is moving away from sustainability. Although we understand there are strengths and limitations of the methodology, we argue that each metric identifies changes to major components of the system. Published by Elsevier Ltd.
RESRAD for Radiological Risk Assessment. Comparison with EPA CERCLA Tools - PRG and DCC Calculators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Cheng, J. -J.; Kamboj, S.
The purpose of this report is two-fold. First, the risk assessment methodology for both RESRAD and the EPA’s tools is reviewed. This includes a review of the EPA’s justification for using a dose-to-risk conversion factor to reduce the dose-based protective ARAR from 15 to 12 mrem/yr. Second, the models and parameters used in RESRAD and the EPA PRG and DCC Calculators are compared in detail, and the results are summarized and discussed. Although there are suites of software tools in the RESRAD family of codes and the EPA Calculators, the scope of this report is limited to the RESRAD (onsite) code for soil contamination and the EPA’s PRG and DCC Calculators, also for soil contamination.
NASA Astrophysics Data System (ADS)
Athanasiou, Christina; Vasilakaki, Sofia; Dellis, Dimitris; Cournia, Zoe
2018-01-01
Computer-aided drug design has become an integral part of drug discovery and development in the pharmaceutical and biotechnology industry, and is nowadays extensively used in the lead identification and lead optimization phases. The drug design data resource (D3R) organizes challenges against blinded experimental data to prospectively test computational methodologies as an opportunity for improved methods and algorithms to emerge. We participated in Grand Challenge 2 to predict the crystallographic poses of 36 Farnesoid X Receptor (FXR)-bound ligands and the relative binding affinities for two designated subsets of 18 and 15 FXR-bound ligands. Here, we present our methodology for pose and affinity predictions and its evaluation after the release of the experimental data. For predicting the crystallographic poses, we used docking and physics-based pose prediction methods guided by the binding poses of native ligands. For FXR ligands with known chemotypes in the PDB, we accurately predicted their binding modes, while for those with unknown chemotypes the predictions were more challenging. Our group ranked #1st (based on the median RMSD) out of 46 groups, which submitted complete entries for the binding pose prediction challenge. For the relative binding affinity prediction challenge, we performed free energy perturbation (FEP) calculations coupled with molecular dynamics (MD) simulations. FEP/MD calculations displayed a high success rate in identifying compounds with better or worse binding affinity than the reference (parent) compound. Our studies suggest that when ligands with chemical precedent are available in the literature, binding pose predictions using docking and physics-based methods are reliable; however, predictions are challenging for ligands with completely unknown chemotypes. We also show that FEP/MD calculations hold predictive value and can nowadays be used in a high throughput mode in a lead optimization project provided that crystal structures of sufficiently high quality are available.
Scanlon, Kelly A; Gray, George M; Francis, Royce A; Lloyd, Shannon M; LaPuma, Peter
2013-03-06
Life cycle assessment (LCA) is a systems-based method used to determine potential impacts to the environment associated with a product throughout its life cycle. Conclusions from LCA studies can be applied to support decisions regarding product design or public policy, therefore, all relevant inputs (e.g., raw materials, energy) and outputs (e.g., emissions, waste) to the product system should be evaluated to estimate impacts. Currently, work-related impacts are not routinely considered in LCA. The objectives of this paper are: 1) introduce the work environment disability-adjusted life year (WE-DALY), one portion of a characterization factor used to express the magnitude of impacts to human health attributable to work-related exposures to workplace hazards; 2) outline the methods for calculating the WE-DALY; 3) demonstrate the calculation; and 4) highlight strengths and weaknesses of the methodological approach. The concept of the WE-DALY and the methodological approach to its calculation is grounded in the World Health Organization's disability-adjusted life year (DALY). Like the DALY, the WE-DALY equation considers the years of life lost due to premature mortality and the years of life lived with disability outcomes to estimate the total number of years of healthy life lost in a population. The equation requires input in the form of the number of fatal and nonfatal injuries and illnesses that occur in the industries relevant to the product system evaluated in the LCA study, the age of the worker at the time of the fatal or nonfatal injury or illness, the severity of the injury or illness, and the duration of time lived with the outcomes of the injury or illness. The methodological approach for the WE-DALY requires data from various sources, multi-step instructions to determine each variable used in the WE-DALY equation, and assumptions based on professional opinion. Results support the use of the WE-DALY in a characterization factor in LCA. Integrating occupational health into LCA studies will provide opportunities to prevent shifting of impacts between the work environment and the environment external to the workplace and co-optimize human health, to include worker health, and environmental health.
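The DALY-style aggregation underlying the WE-DALY combines years of life lost (YLL) and years lived with disability (YLD) in the standard way shown below. The injury counts, ages, disability weights, and durations are placeholder assumptions, not values from the paper.

```python
# Simplified (undiscounted, no age-weighting) DALY-style aggregation:
# DALY = YLL + YLD; YLL = deaths * remaining life expectancy;
# YLD = cases * disability weight * duration.

def yll(deaths, remaining_life_expectancy_years):
    return deaths * remaining_life_expectancy_years

def yld(cases, disability_weight, duration_years):
    return cases * disability_weight * duration_years

# Hypothetical work-related outcomes attributed to one product system:
fatalities = yll(deaths=2, remaining_life_expectancy_years=35.0)
injuries = yld(cases=120, disability_weight=0.10, duration_years=0.5)
illnesses = yld(cases=15, disability_weight=0.30, duration_years=10.0)

healthy_life_years_lost = fatalities + injuries + illnesses
print(f"Healthy life years lost (illustrative): {healthy_life_years_lost:.1f}")
```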
Meijster, Tim; van Duuren-Stuurman, Birgit; Heederik, Dick; Houba, Remko; Koningsveld, Ernst; Warren, Nicholas; Tielemans, Erik
2011-10-01
Use of cost-benefit analysis in occupational health increases insight into the intervention strategy that maximises the cost-benefit ratio. This study presents a methodological framework identifying the most important elements of a cost-benefit analysis for occupational health settings. One of the main aims of the methodology is to evaluate cost-benefit ratios for different stakeholders (employers, employees and society). The developed methodology was applied to two intervention strategies focused on reducing respiratory diseases. A cost-benefit framework was developed and used to set up a calculation spreadsheet containing the inputs and algorithms required to calculate the costs and benefits for all cost elements. Inputs from a large variety of sources were used to calculate total costs, total benefits, net costs and the benefit-to-costs ratio for both intervention scenarios. Implementation of a covenant intervention program resulted in a net benefit of €16 848 546 over 20 years for a population of 10 000 workers. Implementation was cost-effective for all stakeholders. For a health surveillance scenario, total benefits resulting from a decreased disease burden were estimated to be €44 659 352. The costs of the interventions could not be calculated. This study provides important insights for developing effective intervention strategies in the field of occupational medicine. Use of a model based approach enables investigation of those parameters most likely to impact on the effectiveness and costs of interventions for work related diseases. Our case study highlights the importance of considering different perspectives (of employers, society and employees) in assessing and sharing the costs and benefits of interventions.
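The headline quantities reported above (total costs, total benefits, net benefit, benefit-to-cost ratio) reduce to the simple discounted accounting sketched below. The cash flows, discount rate, and horizon are invented and make no attempt to reproduce the covenant or health surveillance scenarios.

```python
def npv(cash_flows, rate):
    """Net present value of a list of annual cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

years = 20
rate = 0.04
# Hypothetical intervention for a 10,000-worker population:
costs = [500_000] + [150_000] * (years - 1)      # setup cost, then annual running cost
benefits = [0] + [350_000] * (years - 1)         # avoided disease burden, productivity, etc.

total_costs, total_benefits = npv(costs, rate), npv(benefits, rate)
print(f"net benefit = {total_benefits - total_costs:,.0f}")
print(f"benefit-to-cost ratio = {total_benefits / total_costs:.2f}")
```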
Maurer, Marina J M; Schellekens, Reinout C A; Wutzke, Klaus D; Stellaard, Frans
2013-01-01
This paper describes various methodological aspects that were encountered during the development of a system to monitor the in vivo behaviour of a newly developed colon delivery device that enables oral drug treatment of inflammatory bowel diseases. [(13)C]urea was chosen as the marker substance. Release of [(13)C]urea in the ileocolonic region is proven by the exhalation of (13)CO2 in breath due to bacterial fermentation of [(13)C]urea. The (13)CO2 exhalation kinetics allows the calculation of a lag time as marker for delay of release, a pulse time as marker for the speed of drug release and the fraction of the dose that is fermented. To determine the total bioavailability, also the fraction of the dose absorbed from the intestine must be quantified. Initially, this was done by calculating the time-dependent [(13)C]urea appearance in the body urea pool via measurement of (13)C abundance and concentration of plasma urea. Thereafter, a new methodology was successfully developed to obtain the bioavailability data by measurement of the urinary excretion rate of [(13)C]urea. These techniques required two experimental days, one to test the coated device, another to test the uncoated device to obtain reference values for the situation that 100 % of [(13)C]urea is absorbed. This is hampered by large day-to-day variations in urea metabolism. Finally, a completely non-invasive, one-day test was worked out based on a dual isotope approach applying a simultaneous administration of [(13)C]urea in a coated device and [(15)N2]urea in an uncoated device. All aspects of isotope-related analytical methodologies and required calculation and correction systems are described.
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
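The per-dimension information gain used above to bias the mutation probabilities is the usual entropy difference. A minimal sketch for a binary term feature and class labels follows; the toy dataset is an invented example.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(class; feature) = H(class) - H(class | feature) for a discrete feature."""
    total = entropy(labels)
    n = len(labels)
    conditional = 0.0
    for value in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == value]
        conditional += len(subset) / n * entropy(subset)
    return total - conditional

# Toy example: does a term occur in the document (1/0) versus the document class.
term_present = [1, 1, 0, 0, 1, 0, 1, 0]
doc_class    = ['spam', 'spam', 'ham', 'ham', 'spam', 'ham', 'ham', 'ham']
print(f"IG = {information_gain(term_present, doc_class):.3f} bits")
```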
Global Artificial Boundary Conditions for Computation of External Flow Problems with Propulsive Jets
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon; Abarbanel, Saul; Nordstrom, Jan; Ryabenkii, Viktor; Vatsa, Veer
1998-01-01
We propose new global artificial boundary conditions (ABC's) for computation of flows with propulsive jets. The algorithm is based on application of the difference potentials method (DPM). Previously, similar boundary conditions have been implemented for calculation of external compressible viscous flows around finite bodies. The proposed modification substantially extends the applicability range of the DPM-based algorithm. In the paper, we present the general formulation of the problem, describe our numerical methodology, and discuss the corresponding computational results. The particular configuration that we analyze is a slender three-dimensional body with boat-tail geometry and supersonic jet exhaust in a subsonic external flow under zero angle of attack. Similarly to the results obtained earlier for the flows around airfoils and wings, current results for the jet flow case corroborate the superiority of the DPM-based ABC's over standard local methodologies from the standpoints of accuracy, overall numerical performance, and robustness.
A distorted-wave methodology for electron-ion impact excitation - Calculation for two-electron ions
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Temkin, A.
1977-01-01
A distorted-wave program is being developed for calculating the excitation of few-electron ions by electron impact. It uses the exchange approximation to represent the exact initial-state wavefunction in the T-matrix expression for the excitation amplitude. The program has been implemented for excitation of the 2(1,3)(S,P) states of two-electron ions. Some of the astrophysical applications of these cross sections as well as the motivation and requirements of the calculational methodology are discussed.
NASA Astrophysics Data System (ADS)
Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang
2018-07-01
A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both the whole tooth profile measurement and a single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through the digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principle and error compensation theory, a mathematical model for the accurate calculation and data processing of manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by the optimization iterative solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.
Graph-based linear scaling electronic structure theory.
Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Graph-based linear scaling electronic structure theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.
2016-06-21
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Ab initio modeling of complex amorphous transition-metal-based ceramics.
Houska, J; Kos, S
2011-01-19
Binary and ternary amorphous transition metal (TM) nitrides and oxides are of great interest because of their suitability for diverse applications ranging from high-temperature machining to the production of optical filters or electrochromic devices. However, understanding of bonding in, and electronic structure of, these materials represents a challenge mainly due to the d electrons in their valence band. In the present work, we report ab initio calculations of the structure and electronic structure of ZrSiN materials. We focus on the methodology needed for the interpretation and automatic analysis of the bonding structure, on the effect of the length of the calculation on the convergence of individual quantities of interest and on the electronic structure of materials. We show that the traditional form of the Wannier function center-based algorithm fails due to the presence of d electrons in the valence band. We propose a modified algorithm, which allows one to analyze bonding structure in TM-based systems. We observe an appearance of valence p states of TM atoms in the electronic spectra of such systems (not only ZrSiN but also NbO(x) and WAuO), and examine the importance of the p states for the character of the bonding as well as for facilitating the bonding analysis. The results show both the physical phenomena and the computational methodology valid for a wide range of TM-based ceramics.
Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash
A methodology has been developed to determine the impacts of the ISO 50001 Energy Management System (EnMS) at a regional or country level. The impacts of the ISO 50001 EnMS include energy, CO2 emission, and cost savings. This internationally recognized and transparent methodology has been embodied in a user-friendly Microsoft Excel®-based tool called the ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical for obtaining accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of the ISO 50001 EnMS.
Allocation of nursing care hours in a combined ophthalmic nursing unit.
Navarro, V B; Stout, W A; Tolley, F M
1995-04-01
Traditional service configuration with separate nursing units for outpatient and inpatient care is becoming ineffective for new patient care delivery models. With the new configuration of a combined nursing unit, it was necessary to rethink traditional reporting methodologies and calculation of hours of care. This project management plan is an initial attempt to develop a standard costing/productivity model for a combined unit. The methodology developed from this plan measures nursing care hours for each patient population to determine the number of full time equivalents (FTEs) for a combined unit and allocates FTEs based on inpatient (IP), outpatient (OP), and emergency room (ER) volumes.
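A minimal sketch of the kind of hours-of-care accounting described above: required care hours per patient population are converted to FTEs and allocated by volume. The hours-per-encounter figures and productive hours per FTE are illustrative assumptions, not the unit's actual standards.

```python
# Illustrative FTE calculation for a combined unit (all figures hypothetical).

PRODUCTIVE_HOURS_PER_FTE = 1872          # roughly 2080 paid hours minus benefit time

workload = {                             # annual volume, care hours per encounter
    "inpatient":  (900,  6.0),
    "outpatient": (7000, 1.2),
    "emergency":  (1500, 2.0),
}

care_hours = {k: vol * hrs for k, (vol, hrs) in workload.items()}
total_hours = sum(care_hours.values())
total_fte = total_hours / PRODUCTIVE_HOURS_PER_FTE

for k, hrs in care_hours.items():
    print(f"{k:10s}: {hrs:8.0f} h -> {total_fte * hrs / total_hours:.1f} FTE")
print(f"combined unit total: {total_fte:.1f} FTE")
```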
A spatial ammonia emission inventory for pig farming
NASA Astrophysics Data System (ADS)
Rebolledo, Boris; Gil, Antonia; Pallarés, Javier
2013-01-01
Atmospheric emissions of ammonia (NH3) from the agricultural sector have become a significant environmental and public concern as they have impacts on human health and ecosystems. This work proposes an improved methodology in order to identify administrative regions with high NH3 emissions from pig farming and calculates an ammonia density map (kg NH3-N ha-1), based on the number of pigs and available agricultural land, terrain slopes, groundwater bodies, soil permeability, zones sensitive to nitrate pollution and surface water buffer zones. The methodology has been used to construct a general tool for locating ammonia emissions from pig farming when detailed information of livestock farms is not available.
Wheel life prediction model - an alternative to the FASTSIM algorithm for RCF
NASA Astrophysics Data System (ADS)
Hossein-Nia, Saeed; Sichani, Matin Sh.; Stichel, Sebastian; Casanueva, Carlos
2018-07-01
In this article, a wheel life prediction model considering wear and rolling contact fatigue (RCF) is developed and applied to a heavy-haul locomotive. For wear calculations, a methodology based on Archard's wear calculation theory is used. The simulated wear depth is compared with profile measurements within 100,000 km. For RCF, a shakedown-based theory is applied locally, using the FaStrip algorithm to estimate the tangential stresses instead of FASTSIM. The differences between the two algorithms on damage prediction models are studied. The running distance between the two reprofiling due to RCF is estimated based on a Wöhler-like relationship developed from laboratory test results from the literature and the Palmgren-Miner rule. The simulated crack locations and their angles are compared with a five-year field study. Calculations to study the effects of electro-dynamic braking, track gauge, harder wheel material and the increase of axle load on the wheel life are also carried out.
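The running distance to reprofiling described above rests on a Wöhler-type damage-per-cycle relationship accumulated with the Palmgren-Miner rule. The sketch below shows that bookkeeping with an invented load spectrum and Wöhler constants, not the locomotive data.

```python
# Palmgren-Miner damage accumulation with a Wohler-like life curve (illustrative).

def cycles_to_crack(stress_index, C=1.0e15, m=3.0):
    """Hypothetical Wohler relationship: N = C / (stress index)^m."""
    return C / stress_index ** m

# Assumed shakedown-based damage spectrum per 1000 km of running:
spectrum_per_1000km = {          # stress index : number of contact cycles
    300.0: 2.0e5,
    450.0: 4.0e4,
    600.0: 5.0e3,
}

damage_per_1000km = sum(n / cycles_to_crack(s) for s, n in spectrum_per_1000km.items())
distance_to_reprofiling = 1000.0 / damage_per_1000km   # km until the damage sum reaches 1
print(f"predicted distance between reprofiling: {distance_to_reprofiling:,.0f} km")
```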
Free Energy Perturbation Calculations of the Thermodynamics of Protein Side-Chain Mutations.
Steinbrecher, Thomas; Abel, Robert; Clark, Anthony; Friesner, Richard
2017-04-07
Protein side-chain mutation is fundamental both to natural evolutionary processes and to the engineering of protein therapeutics, which constitute an increasing fraction of important medications. Molecular simulation enables the prediction of the effects of mutation on properties such as binding affinity, secondary and tertiary structure, conformational dynamics, and thermal stability. A number of widely differing approaches have been applied to these predictions, including sequence-based algorithms, knowledge-based potential functions, and all-atom molecular mechanics calculations. Free energy perturbation theory, employing all-atom and explicit-solvent molecular dynamics simulations, is a rigorous physics-based approach for calculating thermodynamic effects of, for example, protein side-chain mutations. Over the past several years, we have initiated an investigation of the ability of our most recent free energy perturbation methodology to model the thermodynamics of protein mutation for two specific problems: protein-protein binding affinities and protein thermal stability. We highlight recent advances in the field and outline current and future challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.
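At the core of free energy perturbation is the Zwanzig exponential-averaging identity, dF = -kT ln < exp(-dU/kT) >_A, typically applied over a series of alchemical windows whose contributions are summed. The sketch below evaluates it on synthetic energy differences as an illustration only, not a protein-mutation workflow.

```python
import numpy as np

KB_KCAL = 0.0019872041     # Boltzmann constant, kcal/(mol K)

def zwanzig_delta_f(delta_u_kcal, temperature=300.0):
    """Free energy difference from the Zwanzig relation,
    dF = -kT * ln < exp(-dU/kT) >_A, for sampled energy differences dU."""
    kt = KB_KCAL * temperature
    x = -np.asarray(delta_u_kcal) / kt
    # log-sum-exp for numerical stability of the exponential average
    return -kt * (np.logaddexp.reduce(x) - np.log(len(x)))

# Synthetic dU samples (kcal/mol) for one alchemical window; in practice a
# mutation is split into many windows and the window free energies are summed.
rng = np.random.default_rng(1)
du = rng.normal(loc=0.8, scale=0.5, size=5000)
print(f"window dF ~ {zwanzig_delta_f(du):.2f} kcal/mol")
```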
Charpentier, R.R.; Klett, T.R.
2005-01-01
During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that solely use statistical methods without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. © 2005 International Association for Mathematical Geology.
NASA Astrophysics Data System (ADS)
Mert, A.
2016-12-01
The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically-based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broad-band strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically-based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all considerable-magnitude earthquakes. To generate the high frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGF). For frequencies below 0.5 Hz, the simulations are obtained using Synthetic Green's Functions (SGF), which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitude earthquakes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazards for all studied sites in the Marmara region.
NASA Astrophysics Data System (ADS)
Bekas, C.; Curioni, A.
2010-06-01
Enforcing the orthogonality of approximate wavefunctions becomes one of the dominant computational kernels in planewave-based Density Functional Theory electronic structure calculations that involve thousands of atoms. In this context, algorithms that enjoy both excellent scalability and excellent single-processor performance are much needed. In this paper we present block versions of the Gram-Schmidt method and show that they are excellent candidates for our purposes. We compare the new approach with the state-of-the-art practice in planewave-based calculations and find that it has much to offer, especially when applied on massively parallel supercomputers such as the IBM Blue Gene/P supercomputer. The new method achieves excellent sustained performance that surpasses 73 TFLOPS (67% of peak) on 8 Blue Gene/P racks (32,768 compute cores), while it enables more than a twofold decrease in run time compared with the best competing methodology.
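As an illustration of the block idea, a serial NumPy sketch of block classical Gram-Schmidt with one re-orthogonalization pass per block is given below; the block size and test matrix are arbitrary, and the paper's massively parallel implementation is far more involved.

```python
import numpy as np

def block_gram_schmidt(A, block_size=64):
    """Orthonormalize the columns of A block by block (classical block Gram-Schmidt
    with one re-orthogonalization pass per block). Returns Q with Q.T @ Q ~ I."""
    n, m = A.shape
    Q = np.empty_like(A, dtype=float)
    for start in range(0, m, block_size):
        stop = min(start + block_size, m)
        V = A[:, start:stop].astype(float)
        for _ in range(2):                               # "twice is enough" re-orthogonalization
            if start > 0:
                V -= Q[:, :start] @ (Q[:, :start].T @ V)  # project out previous blocks
            V, _ = np.linalg.qr(V)                        # orthonormalize within the block
        Q[:, start:stop] = V
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 300))
Q = block_gram_schmidt(A, block_size=64)
print(np.max(np.abs(Q.T @ Q - np.eye(300))))             # orthogonality error, ~1e-15
```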
Development and testing of a European Union-wide farm-level carbon calculator
Tuomisto, Hanna L; De Camillis, Camillo; Leip, Adrian; Nisini, Luigi; Pelletier, Nathan; Haastrup, Palle
2015-01-01
Direct greenhouse gas (GHG) emissions from agriculture accounted for approximately 10% of total European Union (EU) emissions in 2010. To reduce farming-related GHG emissions, appropriate policy measures and supporting tools for promoting low-C farming practices may be efficacious. This article presents the methodology and testing results of a new EU-wide, farm-level C footprint calculator. The Carbon Calculator quantifies GHG emissions based on international standards and technical specifications on Life Cycle Assessment (LCA) and C footprinting. The tool delivers its results both at the farm level and as allocated to up to 5 main products of the farm. In addition to the quantification of GHG emissions, the calculator proposes mitigation options and sequestration actions that may be suitable for individual farms. The results obtained during a survey of 54 farms from 8 EU Member States are presented. These farms were selected to represent the diversity of farm types across different environmental zones in the EU. The product C footprint results in the data set show a wide range of variation between minimum and maximum values. The results of the mitigation actions showed that the tool can help identify practices that can lead to substantial emission reductions. To avoid burden-shifting from climate change to other environmental issues, future improvements of the tool should incorporate other environmental impact categories rather than focusing solely on GHG emissions. Integr Environ Assess Manag 2015;11:404–416. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of SETAC. Key Points The methodology and testing results of a new European Union-wide, farm-level carbon calculator are presented. The Carbon Calculator reports life cycle assessment-based greenhouse gas emissions at farm and product levels and recommends farm-specific mitigation actions. Based on the results obtained from testing the tool in 54 farms in 8 European countries, it was found that the product-level carbon footprint results are comparable with those of other studies focusing on similar products. The results of the mitigation actions showed that the tool can help identify practices that can lead to substantial emission reductions. PMID:25655187
Methodological advances in unit cost calculation of psychiatric residential care in Spain.
Moreno, Karen; Sanchez, Eduardo; Salvador-Carulla, Luis
2008-06-01
The care of the severely mentally ill who need intensive support for daily living (dependent persons) accounts for an increasingly large proportion of public expenditure in many European countries. The main aim of this study was the design and implementation of a solid methodology to calculate unit costs of different types of care. To date, methodologies used in Spain have produced inaccurate figures, suggesting few variations in patient consumption of the same service. An adaptation of the Activity-Based Costing methodology was applied in Navarre, a region in the north of Spain, as a pilot project for the public mental health services. A unit cost per care process was obtained for all levels of care considered in each service during 2005. The European Service Mapping Schedule (ESMS) codes were used to classify the services for later comparisons. Finally, in order to avoid problems of asymmetric cost distribution, a simple Bayesian model was used. As an illustration, we report the results obtained for long-term residential care and note that there are important variations between unit costs when considering different levels of care. Considering three levels of care (Level 1-low, Level 2-medium and Level 3-intensive), the cost per bed in Level 3 was 10% higher than that of Level 2. The results obtained using the cost methodology described provide more useful information than those using conventional methods, although its implementation requires much time to compile the necessary information during the initial stages, as well as the collaboration of staff and managers working in the services. However, in some services, if no important variations exist in patient care, another method would be advisable, although our system provides very useful information about patterns of care from a clinical point of view. Detailed work is required at the beginning of the implementation in order to avoid the calculation of distorted figures and to improve the levels of decision making within the Health Care Service. IMPLICATIONS FOR HEALTH CARE POLICY AND FORMULATIONS: Like other European countries, Spain has adopted a new care system for the dependent population. To finance this new system, reliable figures must be calculated for each type of user in order to establish tariffs or public prices. This study provides a useful management tool to assist in decision making. The methodology should be implemented in other regions of Spain and even in other countries in order to compare our results and validate the cost system designed.
Optical absorption spectra and g factor of MgO: Mn2+explored by ab initio and semi empirical methods
NASA Astrophysics Data System (ADS)
Andreici Eftimie, E.-L.; Avram, C. N.; Brik, M. G.; Avram, N. M.
2018-02-01
In this paper we present a methodology for calculating the optical absorption spectra, ligand field parameters and g factor of Mn2+ (3d5) ions doped in an MgO host crystal. The proposed technique combines two methods: ab initio multireference (MR) calculations and the semi-empirical ligand field (LF) approach in the framework of the exchange charge model (ECM). Both methods are applied to the [MnO6]10- cluster embedded in an extended point charge field of host matrix ligands based on the Gellé-Lepetit procedure. The first step of the investigation was the full optimization of the cubic structure of the perfect MgO crystal, followed by the structural optimization of the doped MgO:Mn2+ system, using periodic density functional theory (DFT). Ab initio MR wave function approaches, namely the complete active space self-consistent field (CASSCF), N-electron valence second order perturbation theory (NEVPT2) and spectroscopy oriented configuration interaction (SORCI), are used for the calculations. Scalar relativistic effects have also been taken into account through the second order Douglas-Kroll-Hess (DKH2) procedure. Ab initio ligand field theory (AILFT) allows all LF parameters and the spin-orbit coupling constant to be extracted from such calculations. In addition, the ECM of ligand field theory (LFT) has been used for modelling the optical absorption spectra. Perturbation theory (PT) was employed for the g factor calculation in the semi-empirical LFT. The results of each of the aforementioned types of calculations are discussed, and comparisons with experiment show reasonable agreement, which justifies this new methodology based on the simultaneous use of both methods. This study establishes fundamental principles for the further modelling of larger embedded cluster models of doped metal oxides.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-22
... of production (COP) using annual-average, rather than quarterly, costs; and (3) defined the universe... calculation of ICDAS's COP and an explanation for the methodology used to determine the universe of U.S. sales..., 2009, ruling and determined that it was appropriate to: (1) Base ICDAS's universe of sales on entry...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
... calculates the CBOE Gold ETF Volatility Index (``GVZ''), which is based on the VIX methodology applied to options on the SPDR Gold Trust (``GLD''). The current filing would permit $0.50 strike price intervals for... exchange-traded fund (``ETF'') options. See Rule 1012, Commentary .05(a)(iv). To the extent that the CBOE...
Probabilistic-Based Modeling and Simulation Assessment
2010-06-01
developed to determine the relative importance of structural components of the vehicle under different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the...parameter variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment
HRR Upgrade to mass loss calorimeter and modified Schlyter test for FR Wood
Mark A. Dietenberger; Charles R. Boardman
2013-01-01
Enhanced Heat Release Rate (HRR) methodology has been extended to the Mass Loss Calorimeter (MLC) and the Modified Schlyter flame spread test to evaluate the effectiveness of fire retardants used on wood-based materials. Modifications to the MLC include installation of a thermopile on the chimney walls to correct systematic errors in the sensible HRR calculations to account for...
Extended cooperative control synthesis
NASA Technical Reports Server (NTRS)
Davidson, John B.; Schmidt, David K.
1994-01-01
This paper reports on research for extending the Cooperative Control Synthesis methodology to include a more accurate modeling of the pilot's controller dynamics. Cooperative Control Synthesis (CCS) is a methodology that addresses the problem of how to design control laws for piloted, high-order, multivariate systems and/or non-conventional dynamic configurations in the absence of flying qualities specifications. This is accomplished by emphasizing the parallel structure inherent in any pilot-controlled, augmented vehicle. The original CCS methodology is extended to include the Modified Optimal Control Model (MOCM), which is based upon the optimal control model of the human operator developed by Kleinman, Baron, and Levison in 1970. This model provides a modeling of the pilot's compensation dynamics that is more accurate than the simplified pilot dynamic representation currently in the CCS methodology. Inclusion of the MOCM into the CCS also enables the modeling of pilot-observation perception thresholds and pilot-observation attention allocation effects. This Extended Cooperative Control Synthesis (ECCS) allows for the direct calculation of pilot and system open- and closed-loop transfer functions in pole/zero form and is readily implemented in current software capable of analysis and design for dynamic systems. Example results based upon synthesizing an augmentation control law for an acceleration command system in a compensatory tracking task using the ECCS are compared with a similar synthesis performed using the original CCS methodology. The ECCS is shown to provide augmentation control laws that yield more favorable predicted closed-loop flying qualities and tracking performance than those synthesized using the original CCS methodology.
Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno
2018-05-28
Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and pre-strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress), which in turn are used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, both having their inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot methodology to compute the in vivo stress state. However, cumbersome and problem-specific derivations of the formulations and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, have prevented broad implementation of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated in an arterial tube and a porcine biventricular myocardium model. The computational power and efficiency of the methodology were shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model. Copyright © 2018 Elsevier Ltd. All rights reserved.
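The iterative alternative mentioned above is often implemented as a fixed-point backward-displacement scheme: run the forward problem from a trial unloaded geometry and correct the trial by the mismatch with the imaged, loaded geometry. The one-dimensional nonlinear "tissue" below is purely illustrative and is not the authors' direct IE formulation.

```python
def forward_deformed_length(L0, force, k=2.0, alpha=1.5):
    """Toy forward problem: loaded length of a nonlinear spring of unloaded length L0
    (stiffness grows with stretch). Stands in for a finite element forward solve."""
    x = L0
    for _ in range(200):                       # fixed-point solve of x = L0 + F / k(x)
        x = L0 + force / (k * (x / L0) ** alpha)
    return x

def recover_unloaded_length(x_imaged, force, tol=1e-10, max_iter=100):
    """Fixed-point (backward displacement) iteration: update the trial unloaded
    geometry by the residual between the computed and imaged loaded geometry."""
    L0 = x_imaged                              # initial guess: the imaged geometry
    for i in range(max_iter):
        residual = forward_deformed_length(L0, force) - x_imaged
        if abs(residual) < tol:
            return L0, i
        L0 -= residual
    return L0, max_iter

x_imaged, force = 1.20, 0.8                    # imaged (loaded) length and load, arbitrary units
L0, iters = recover_unloaded_length(x_imaged, force)
print(f"unloaded length ~ {L0:.6f} after {iters} iterations")
print(f"check: forward({L0:.6f}) = {forward_deformed_length(L0, force):.6f}")
```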
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
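Error vector magnitude reduces to the RMS error between received and ideal symbols, normalized here by the RMS ideal-symbol power; the QPSK stream and the burst of distortion below are synthetic stand-ins, not the paper's simulated receiver output or its exact EVM normalization.

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS error vector magnitude, normalized to the RMS ideal-symbol magnitude."""
    err = np.mean(np.abs(received - ideal) ** 2)
    ref = np.mean(np.abs(ideal) ** 2)
    return 100.0 * np.sqrt(err / ref)

rng = np.random.default_rng(1)
bits = rng.integers(0, 4, size=1000)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-energy QPSK constellation
noise = 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
received = qpsk + noise
received[100:110] += 0.6 * np.exp(1j * 0.3)          # a burst of SET-like distortion
print(f"EVM = {evm_percent(received, qpsk):.2f} %")
```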
Methodological reporting of randomized trials in five leading Chinese nursing journals.
Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu
2014-01-01
Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.
40 CFR 98.247 - Records that must be retained.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Tier 4 Calculation Methodology in § 98.37. (b) If you comply with the mass balance methodology in § 98... with § 98.243(c)(4). (2) Start and end times and calculated carbon contents for time periods when off... determining carbon content of feedstock or product. (3) A part of the monitoring plan required under § 98.3(g...
Serenity: A subsystem quantum chemistry program.
Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes
2018-05-15
We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory. © 2018 Wiley Periodicals, Inc.
Comparison of alternative weight recalibration methods for diagnosis-related groups
Rogowski, Jeannette Roskamp; Byrne, Daniel J.
1990-01-01
In this article, alternative methodologies for recalibration of the diagnosis-related group (DRG) weights are examined. Based on 1984 data, cost- and charge-based weights are less congruent than those calculated with 1981 data. Previous studies using 1981 data demonstrated that cost- and charge-based weights were not very different. Charge weights result in higher payments to surgical DRGs and lower payments to medical DRGs, relative to cost weights. At the provider level, charge weights result in higher payments to large urban hospitals and teaching hospitals, relative to cost weights. PMID:10113568
Reevaluation of tephra volumes for the 1982 eruption of El Chichón volcano, Mexico
NASA Astrophysics Data System (ADS)
Nathenson, M.; Fierstein, J.
2012-12-01
In a recent numerical simulation of tephra transport and deposition for the 1982 eruption, Bonasia et al. (2012) used masses for the tephra layers (A-1, B, and C) based on the volume data of Carey and Sigurdsson (1986) calculated by the methodology of Rose et al. (1973). For reasons not clear, using the same methodology we obtained volumes for layers A-1 and B much less than those previously reported. For example, for layer A-1, Carey and Sigurdsson (1986) reported a volume of 0.60 km3, whereas we obtain a volume of 0.23 km3. Moreover, applying the more recent methodology of tephra-volume calculation (Pyle, 1989; Fierstein and Nathenson, 1992) and using the isopach maps in Carey and Sigurdsson (1986), we calculate a total tephra volume of 0.52 km3 (A-1, 0.135; B, 0.125; and C, 0.26 km3). In contrast, Carey and Sigurdsson (1986) report a much larger total volume of 2.19 km3. Such disagreement not only reflects the differing methodologies, but we propose that the volumes calculated with the methodology of Pyle and of Fierstein and Nathenson, involving the use of straight lines on a log thickness versus square root of area plot, better represent the actual fall deposits. After measuring the areas of the isomass contours for the HAZMAP and FALL3D simulations in Bonasia et al. (2012), we applied the Pyle-Fierstein and Nathenson methodology to calculate the tephra masses deposited on the ground. These masses from five of the simulations range from 70% to 110% of those reported by Carey and Sigurdsson (1986), whereas that for layer B in the HAZMAP calculation is 160%. In the Bonasia et al. (2012) study, the mass erupted by the volcano is a critical input used in the simulation to produce an ash cloud that deposits tephra on the ground. Masses on the ground (as calculated by us) for five of the simulations range from 20% to 46% of the masses used as simulation inputs, whereas that for layer B in the HAZMAP calculation is 74%. It is not clear why the percentages are so variable, nor why the output volumes are such small percentages of the input erupted mass. From our volume calculations, the masses on the ground from the simulations are factors of 2.3 to 10 times what was actually deposited. Given this finding from our reevaluation of volumes, the simulations appear to overestimate the hazards from eruptions of sizes that occurred at El Chichón. Bonasia, R., A. Costa, A. Folch, G. Macedonio, and L. Capra, (2012), Numerical simulation of tephra transport and deposition of the 1982 El Chichón eruption and implications for hazard assessment, J. Volc. Geotherm. Res., 231-232, 39-49. Carey, S. and H. Sigurdsson, (1986), The 1982 eruptions of El Chichon volcano, Mexico: Observations and numerical modelling of tephra-fall distribution, Bull. Volcanol., 48, 127-141. Fierstein, J., and M. Nathenson, (1992), Another look at the calculation of fallout tephra volumes, Bull. Volcanol., 54, 156-167. Pyle, D.M., (1989), The thickness, volume and grainsize of tephra fall deposits, Bull. Volcanol., 51, 1-15. Rose, W.I., Jr., S. Bonis, R.E. Stoiber, M. Keller, and T. Bickford, (1973), Studies of volcanic ash from two recent Central American eruptions, Bull. Volcanol., 37, 338-364.
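For a single straight segment on a plot of log thickness against the square root of isopach area, T(A) = T0 exp(-k sqrt(A)), the Pyle and Fierstein-Nathenson volume is V = 2 T0 / k^2. The isopach thicknesses and areas below are invented for illustration; only the formula comes from the cited methodology.

```python
import numpy as np

# Hypothetical isopach data: thickness (m) and enclosed area (km^2).
thickness = np.array([1.00, 0.50, 0.20, 0.10, 0.05])
area_km2 = np.array([4.0, 30.0, 180.0, 600.0, 1800.0])

# Straight-line fit of ln(T) versus sqrt(A): ln(T) = ln(T0) - k*sqrt(A).
sqrt_area = np.sqrt(area_km2)
slope, intercept = np.polyfit(sqrt_area, np.log(thickness), 1)
k, T0 = -slope, np.exp(intercept)

# Single-segment exponential-thinning volume (Pyle, 1989; Fierstein and Nathenson, 1992).
volume_km3 = 2.0 * (T0 / 1000.0) / k**2       # T0 converted from m to km
print(f"T0 = {T0:.3f} m, k = {k:.4f} per km, volume = {volume_km3:.3f} km^3")
```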
NASA Astrophysics Data System (ADS)
Alfano, M.; Bisagni, C.
2017-01-01
The objective of the running EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology in order to meet the demand of the aerospace industry for lighter structures. Within the project, this article discusses a probability-based methodology developed at Politecnico di Milano. It is based on the combination of the Stress-Strength Interference Method and the Latin Hypercube Method with the aim of predicting the buckling response of three sandwich composite cylindrical shells, assuming a loading condition of pure compression. The three shells are made of the same material, but have different stacking sequences and geometric dimensions. One of them presents three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends highly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.
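A minimal sketch of the sampling idea, assuming a purely notional buckling-load surrogate in place of the shell finite element models: imperfection variables are drawn with a Latin Hypercube design and a low percentile of the resulting load distribution is taken as the probabilistic buckling factor. The distributions and the surrogate coefficients are assumptions.

```python
import numpy as np
from scipy.stats import qmc, norm

n = 2000
# LHS design over 3 imperfection variables: Young's modulus scatter,
# ply misalignment (degrees) and geometric imperfection amplitude (fraction of t).
sample = qmc.LatinHypercube(d=3, seed=3).random(n)
E_ratio = norm(loc=1.0, scale=0.04).ppf(sample[:, 0])   # E / E_nominal
misalign = norm(loc=0.0, scale=1.0).ppf(sample[:, 1])   # degrees
geo_imp = 0.5 * sample[:, 2]                            # uniform on [0, 0.5]

# Notional buckling-load surrogate normalized by the perfect-shell load
# (a stand-in for the finite element model, not the DESICOS shells).
P = E_ratio * (1.0 - 0.03 * np.abs(misalign)) * (1.0 - 0.6 * geo_imp)

reliability = 0.99
factor = np.quantile(P, 1.0 - reliability)   # load level exceeded with 99% probability
print(f"probabilistic buckling factor at 99% reliability: {factor:.3f}")
```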
Basic principles of respiratory function monitoring in ventilated newborns: A review.
Schmalisch, Gerd
2016-09-01
Respiratory monitoring during mechanical ventilation provides a real-time picture of patient-ventilator interaction and is a prerequisite for lung-protective ventilation. Nowadays, measurements of airflow, tidal volume and applied pressures are standard in neonatal ventilators. The measurement of lung volume during mechanical ventilation by tracer gas washout techniques is still under development. The clinical use of capnography, although well established in adults, has not been embraced by neonatologists because of technical and methodological problems in very small infants. While the ventilatory parameters are well defined, the calculation of other physiological parameters is based upon specific assumptions which are difficult to verify. Incomplete knowledge of the theoretical background of these calculations and their limitations can lead to incorrect interpretations with clinical consequences. Therefore, the aim of this review was to describe the basic principles and the underlying assumptions of currently used methods for respiratory function monitoring in ventilated newborns and to highlight methodological limitations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bi, Jian
2010-01-01
As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, the reduction is not risk free in sensory and marketing terms. Over-reduction may change the taste and influence the flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of the BMD and the one-sided lower confidence limit of the BMD (BMDL) are illustrated. The article also discusses how to calculate the BMD and BMDL for overdispersed binary data in replicated testing based on a corrected beta-binomial model. The USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article was originally used to determine an appropriate reduction of certain ingredients, for example, sodium, sugar, and fat in food products, considering both health reasons and sensory or marketing risk.
Imprecise (fuzzy) information in geostatistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardossy, A.; Bogardi, I.; Kelly, W.E.
1988-05-01
A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been the insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.
NASA Astrophysics Data System (ADS)
Merkisz, J.; Lijewski, P.; Fuc, P.; Siedlecki, M.; Ziolkowski, A.
2016-09-01
The paper analyzes the exhaust emissions from farm vehicles based on research performed under field conditions (RDE) according to the NTE procedure. This analysis has shown that it is hard to meet the NTE requirements under field conditions (engine operation in the NTE zone for at least 30 seconds). Due to the very high variability of the engine operating conditions, the share of valid NTE windows is small throughout the entire test. For this reason, a modification of the measurement and exhaust emissions calculation methodology has been proposed for farm vehicles of the NRMM group. A test has been developed composed of the following phases: trip to the operation site (paved roads) and field operations (including u-turns and maneuvering). The range of the operation time share in individual test phases has been determined. A change in the method of calculating the real exhaust emissions has also been implemented in relation to the NTE procedure.
Stefanov, V T
2000-01-01
A methodology is introduced for numerical evaluation, with any given accuracy, of the cumulative probabilities of the proportion of genome shared identical by descent (IBD) on chromosome segments by two individuals in a grandparent-type relationship. Programs are provided in the popular software package Maple for rapidly implementing such evaluations in the cases of grandchild-grandparent and great-grandchild-great-grandparent relationships. Our results can be used to identify chromosomal segments that may contain disease genes. Also, exact P values in significance testing for resemblance of either a grandparent with a grandchild or a great-grandparent with a great-grandchild can be calculated. The genomic continuum model, with Haldane's model for the crossover process, is assumed. This is the model that has been used recently in the genetics literature devoted to IBD calculations. Our methodology is based on viewing the model as a special exponential family and elaborating on recent research results for such families. PMID:11063711
The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool
Stephen, Cook; Benjamin, Longo-Mbenza
2013-01-01
AIM It is difficult for optometrists and general practitioners to know which patients are at risk of glaucoma. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: unlikely to have glaucoma; glaucoma suspect; and glaucoma. A case review methodology of patients with a known diagnosis is employed to validate the calculator risk assessment. RESULTS Data from the records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistical analysis showed: sensitivity = 88%; positive predictive value = 97%; specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening for the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
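The reported biostatistics follow from the usual 2x2 screening-table formulas; the confusion-matrix counts below are not given in the abstract and are hypothetical, chosen only to be roughly consistent with the quoted percentages.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening-table statistics."""
    sensitivity = tp / (tp + fn)      # true positives among those with glaucoma
    specificity = tn / (tn + fp)      # true negatives among those without glaucoma
    ppv = tp / (tp + fp)              # positive predictive value
    return sensitivity, specificity, ppv

# Hypothetical counts for a 400-patient case review (not reported in the abstract).
se, sp, ppv = screening_metrics(tp=318, fp=10, fn=43, tn=29)
print(f"sensitivity {se:.0%}, specificity {sp:.0%}, PPV {ppv:.0%}")
```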
NASA Astrophysics Data System (ADS)
Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.
2018-07-01
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
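One of the simplest property evaluations such a framework automates is the equilibrium lattice constant from an energy-versus-lattice-parameter scan; the toy energy function below stands in for an interatomic-potential evaluation and is not one of the cited calculation scripts.

```python
import numpy as np

def cohesive_energy(a):
    """Toy energy-per-atom model standing in for an interatomic-potential evaluation
    of a cubic cell with lattice parameter a (angstrom); coefficients are hypothetical."""
    a0, E0, curv = 4.05, -3.36, 1.8
    return E0 + curv * (a - a0) ** 2 + 0.3 * (a - a0) ** 3

# Scan lattice parameters, then fit a parabola near the minimum to locate a0.
a_grid = np.linspace(3.8, 4.3, 26)
E_grid = np.array([cohesive_energy(a) for a in a_grid])
i_min = int(np.argmin(E_grid))
window = slice(max(i_min - 3, 0), i_min + 4)
c2, c1, c0 = np.polyfit(a_grid[window], E_grid[window], 2)
a_eq = -c1 / (2.0 * c2)
print(f"equilibrium lattice constant ~ {a_eq:.4f} A, "
      f"E_min ~ {c0 + c1 * a_eq + c2 * a_eq**2:.4f} eV/atom")
```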
Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna
2015-12-18
A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.
Tsiakalos, Miltiadis F; Theodorou, Kiki; Kappas, Constantin; Zefkili, Sofia; Rosenwold, Jean-Claude
2004-04-01
It is well known that considerable underdosage can occur at the edges of a tumor inside the lung because of the degradation of the penumbra due to lack of lateral electronic equilibrium. Although present even at lower energies, this phenomenon is more pronounced for higher energies. Apart from Monte Carlo calculations, most existing Treatment Planning Systems (TPSs) cannot deal with this effect at all, or cannot do so with acceptable accuracy. A methodology has been developed for assessing dose calculation algorithms in the lung region where lateral electronic disequilibrium exists, based on the Quality Index (QI) of the incident beam. A phantom, consisting of layers of polystyrene and lung material, has been irradiated using photon beams of 4, 6, 15, and 20 MV. The cross-plane profiles of each beam for 5x5, 10x10, and 25x10 fields have been measured at the middle of the phantom with the use of films. The penumbra (20%-80%) and fringe (50%-90%) enlargement was measured, and the ratio of the widths for lung to those for polystyrene was defined as the Correction Factor (CF). Monte Carlo calculations in the two phantoms have also been performed for energies of 6, 15, and 20 MV. Five commercial TPS algorithms were tested for their ability to predict the penumbra and fringe enlargement. A linear relationship has been found between the QI of the beams and the CF of the penumbra and fringe enlargement for all the examined fields. Monte Carlo calculations agree very well (less than 1% difference) with the film measurements. The CF values range between 1.1 for 4 MV (QI 0.620) and 2.28 for 20 MV (QI 0.794). Three of the tested TPS algorithms could not predict any enlargement at all for any energy or field, and two of them could predict the penumbra enlargement to some extent. The proposed methodology can help any user or developer check the accuracy of their algorithm for lung cases, based on a simple phantom geometry and the QI of the incident beam. This check is especially important when higher energies are used, as the inaccuracies in existing algorithms can lead to an incorrect choice of energy for lung treatment and consequently to a failure in tumor control.
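The two endpoint values quoted above (CF 1.1 at QI 0.620 and CF 2.28 at QI 0.794) fix a two-point straight line from which a correction factor can be read for intermediate beam qualities; this is an interpolation sketch, not the published regression, which may differ slightly, and the intermediate QI values are illustrative.

```python
def penumbra_correction_factor(qi, qi1=0.620, cf1=1.10, qi2=0.794, cf2=2.28):
    """Linear CF(QI) through the two endpoint values quoted in the abstract."""
    slope = (cf2 - cf1) / (qi2 - qi1)
    return cf1 + slope * (qi - qi1)

for qi in (0.620, 0.670, 0.740, 0.794):   # illustrative beam quality indices
    print(f"QI {qi:.3f} -> penumbra/fringe enlargement factor ~ "
          f"{penumbra_correction_factor(qi):.2f}")
```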
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salata, C; David, M; Rosado, P
Purpose: To use the methodology developed by the National Research Council Canada (NRC) for Fricke dosimetry to determine the G-value at Ir-192 energies. Methods: In this study the Radiology Science Laboratory of Rio de Janeiro State University (LCR) based the G-value determination on the NRC method, using polyethylene bags. Briefly, this method consists of interpolating the G-values calculated for Co-60 and 250 kV x-rays to the average energy of Ir-192 (380 keV). As the Co-60 G-value is well described in the literature and associated with low uncertainties, it was not measured in the present study. The G-values for 150 kV (effective energy of 68 keV), 250 kV (effective energy of 132 keV) and 300 kV (effective energy of 159 keV) were calculated using the air kerma given by a calibrated ion chamber, taken as equivalent to the dose absorbed by the Fricke solution by means of a Monte Carlo calculated conversion factor. Instead of interpolating, as described by the NRC, we plotted the G-value points in a graph and used the line equation to determine the G-value for Ir-192 (380 keV). Results: The measured G-values were 1.436 ± 0.002 µmol/J for 150 kV, 1.472 ± 0.002 µmol/J for 250 kV, and 1.497 ± 0.003 µmol/J for 300 kV. The adopted G-value for Co-60 (1.25 MeV) was 1.613 µmol/J. The R-square of the fitted regression line through those G-value points was 0.991. Using the line equation, the calculated G-value for 380 keV was 1.542 µmol/J. Conclusion: The result found for the Ir-192 G-value is 3.1% lower than the NRC value, but it agrees with previous literature results using different methodologies to calculate this parameter. We will continue this experiment by measuring the G-value for Co-60 in order to compare with the NRC method and better understand the reasons for the observed differences.
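The abstract does not state whether the straight line was taken against energy or against its logarithm; the sketch below assumes a log-energy abscissa because, with the four quoted (energy, G-value) pairs, that choice approximately reproduces both the reported R-square of about 0.99 and the 1.542 µmol/J result at 380 keV (a fit against energy itself gives roughly 1.50 µmol/J). Treat the abscissa choice as an assumption, not as the authors' stated procedure.

```python
import numpy as np

# Effective energies (MeV) and G-values (umol/J) quoted in the abstract.
energy = np.array([0.068, 0.132, 0.159, 1.25])
g_value = np.array([1.436, 1.472, 1.497, 1.613])

# Assumed log-energy abscissa for the straight-line fit.
x = np.log(energy)
slope, intercept = np.polyfit(x, g_value, 1)
pred = intercept + slope * x
r2 = 1.0 - np.sum((g_value - pred) ** 2) / np.sum((g_value - g_value.mean()) ** 2)

g_ir192 = intercept + slope * np.log(0.380)
print(f"R^2 = {r2:.3f}, G(Ir-192, 380 keV) ~ {g_ir192:.3f} umol/J")   # ~0.99, ~1.54
```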
Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites
NASA Technical Reports Server (NTRS)
Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.
2010-01-01
This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.
The semantics of Chemical Markup Language (CML) for computational chemistry : CompChem.
Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter
2012-08-07
This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run, the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
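The ranking step rests on a standard levelised-cost calculation, annualized capital plus operating cost divided by annual net electricity; the figures below are placeholders, not the study's heliostat or plant cost data.

```python
def lcoe(capex, annual_opex, annual_energy_mwh, discount_rate, lifetime_years):
    """Levelised cost of electricity via the capital recovery factor."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
           ((1 + discount_rate) ** lifetime_years - 1))
    return (capex * crf + annual_opex) / annual_energy_mwh

# Placeholder figures for a reference power-tower plant (not from the paper).
cost = lcoe(capex=550e6, annual_opex=12e6, annual_energy_mwh=450_000,
            discount_rate=0.07, lifetime_years=25)
print(f"LCOE ~ {cost:.1f} USD/MWh")
```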
A methodology for modeling barrier island storm-impact scenarios
Mickey, Rangley C.; Long, Joseph W.; Plant, Nathaniel G.; Thompson, David M.; Dalyander, P. Soupy
2017-02-16
A methodology for developing a representative set of storm scenarios based on historical wave buoy and tide gauge data for a region at the Chandeleur Islands, Louisiana, was developed by the U.S. Geological Survey. The total water level was calculated for a 10-year period and analyzed against existing topographic data to identify when storm-induced wave action would affect island morphology. These events were categorized on the basis of the threshold of total water level and duration to create a set of storm scenarios that were simulated, using a high-fidelity, process-based, morphologic evolution model, on an idealized digital elevation model of the Chandeleur Islands. The simulated morphological changes resulting from these scenarios provide a range of impacts that can help coastal managers determine resiliency of proposed or existing coastal structures and identify vulnerable areas within those structures.
Groundwater vulnerability and risk mapping using GIS, modeling and a fuzzy logic tool.
Nobre, R C M; Rotunno Filho, O C; Mansur, W J; Nobre, M M M; Cosenza, C A N
2007-12-07
A groundwater vulnerability and risk mapping assessment, based on a source-pathway-receptor approach, is presented for an urban coastal aquifer in northeastern Brazil. A modified version of the DRASTIC methodology was used to map the intrinsic and specific groundwater vulnerability of a 292 km² study area. A fuzzy hierarchy methodology was adopted to evaluate the potential contaminant source index, including diffuse and point sources. Numerical modeling was performed for delineation of well capture zones, using MODFLOW and MODPATH. The integration of these elements provided the mechanism to assess groundwater pollution risks and identify areas that must be prioritized in terms of groundwater monitoring and restriction on use. A groundwater quality index based on nitrate and chloride concentrations was calculated, which had a positive correlation with the specific vulnerability index.
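For the intrinsic-vulnerability part, a DRASTIC-type index is a weighted sum of rated hydrogeologic factors; the sketch below uses the standard DRASTIC weights and hypothetical ratings for one grid cell, whereas the paper's modified version may weight or rate the factors differently.

```python
# Standard DRASTIC weights (Aller et al., 1987); ratings are 1-10 per factor.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Intrinsic vulnerability index = sum of weight * rating over the 7 factors."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# Hypothetical ratings for one grid cell of a study area.
cell = {"D": 9, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 6}
print("DRASTIC index:", drastic_index(cell))   # higher index means more vulnerable
```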
Application of Steinberg vibration fatigue model for structural verification of space instruments
NASA Astrophysics Data System (ADS)
García, Andrés; Sorribes-Palmer, Félix; Alonso, Gustavo
2018-01-01
Electronic components in spacecraft are subjected to vibration loads during the ascent phase of the launcher. It is important to verify by test and analysis that all parts can survive the most severe load cases. The purpose of this paper is to present the methodology and results of the application of Steinberg's fatigue model to estimate the life of electronic components of the EPT-HET instrument for the Solar Orbiter space mission. A Nastran finite element model (FEM) of the EPT-HET instrument was created and used for the structural analysis. The methodology is based on the use of the FEM of the entire instrument to calculate the relative displacement RDSD and RMS values of the PCBs from random vibration analysis. These values are used to estimate the fatigue life of the most susceptible electronic components with Steinberg's fatigue damage equation and Miner's cumulative fatigue index. The estimations are calculated for two different configurations of the instrument and three different inputs in order to support the redesign process. Finally, these analytical results are contrasted with the inspections and functional tests made after the vibration tests, concluding that this methodology can adequately predict the fatigue damage or survival of the electronic components.
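The life estimate rests on Steinberg's three-band approximation combined with Miner's rule; the sketch below uses a generic S-N exponent and placeholder board response and allowable values rather than the EPT-HET random-vibration results.

```python
def steinberg_three_band_damage(z1_rms, natural_freq_hz, duration_s,
                                z_ref, n_ref, b=6.4):
    """Miner's cumulative damage with Steinberg's three-band technique: the response
    spends 68.3% of the time at the 1-sigma level, 27.1% at 2-sigma and 4.33% at
    3-sigma, and the S-N curve is N * Z**b = const anchored at (z_ref, n_ref)."""
    fractions = {1: 0.683, 2: 0.271, 3: 0.0433}
    damage = 0.0
    for sigma, frac in fractions.items():
        z = sigma * z1_rms
        cycles = natural_freq_hz * duration_s * frac          # n_i
        cycles_to_fail = n_ref * (z_ref / z) ** b             # N_i
        damage += cycles / cycles_to_fail
    return damage

# Placeholder values: board natural frequency, RMS relative displacement from the
# random-vibration analysis, and a hypothetical allowable (not EPT-HET data).
D = steinberg_three_band_damage(z1_rms=0.08, natural_freq_hz=240.0,
                                duration_s=120.0, z_ref=0.30, n_ref=2.0e7)
print(f"cumulative damage index D = {D:.2e} "
      f"(Steinberg suggests keeping D below about 0.7)")
```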
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane
The charter for adversarial delay is to hinder access to critical resources through the use of physical systems that increase an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard to uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
Social Costs of Gambling in the Czech Republic 2012.
Winkler, Petr; Bejdová, Markéta; Csémy, Ladislav; Weissová, Aneta
2017-12-01
Evidence about the social costs of gambling is scarce and the methodology for their calculation has been subject to strong criticism. We aimed to estimate the social costs of gambling in the Czech Republic in 2012. This retrospective, prevalence-based cost-of-illness study builds on the revised methodology of the Australian Productivity Commission. Social costs of gambling were estimated by combining epidemiological and economic data. Prevalence data on the negative consequences of gambling were taken from existing national epidemiological studies. Economic data were taken from various national and international sources. Only the consequences of problem and pathological gambling were taken into account. In 2012, the social costs of gambling in the Czech Republic were estimated to range between 541,619 and 619,608 thousand EUR. While personal and family costs accounted for 63% of all social costs, direct medical costs were estimated to account for only 0.25 to 0.28% of all social costs. This is the first study to estimate the social costs of gambling in any of the Central and East European countries. It builds upon solid evidence about the prevalence of gambling-related problems in the Czech Republic and satisfactorily reliable economic data. However, there are a number of limitations stemming from the assumptions that were made, which suggest that the methodology for the calculation of the social costs of gambling needs further development.
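At its core, a prevalence-based cost-of-illness total is a sum over consequence categories of affected persons times unit cost; the categories and figures in this sketch are invented placeholders, not the Czech estimates.

```python
# Hypothetical prevalence-based cost-of-illness aggregation (placeholder figures,
# not the Czech 2012 estimates): affected persons x unit cost per consequence.
consequences = {
    # category: (persons affected, unit cost in EUR per year)
    "treatment and direct medical care": (4_000, 350),
    "personal and family costs":         (90_000, 3_800),
    "productivity losses":               (40_000, 2_500),
    "crime and justice system":          (10_000, 1_900),
}

total = sum(n * unit_cost for n, unit_cost in consequences.values())
for name, (n, unit_cost) in consequences.items():
    share = 100.0 * n * unit_cost / total
    print(f"{name:<35s} {n * unit_cost / 1e6:8.1f} M EUR ({share:4.1f} %)")
print(f"{'total social costs':<35s} {total / 1e6:8.1f} M EUR")
```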
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... Wage Rule revised the methodology by which the Department calculates the prevailing wages to be paid to... the Department calculates the prevailing wages to be paid to H-2B workers and United States (U.S... effect, it will supersede and make null the prevailing wage provisions at 20 CFR 655.10(b) of the...
NASA Astrophysics Data System (ADS)
Cheng, Liang; Xu, Hao; Li, Shuyi; Chen, Yanming; Zhang, Fangli; Li, Manchun
2018-04-01
As the rate of urbanization continues to accelerate, the utilization of solar energy in buildings plays an increasingly important role in sustainable urban development. For this purpose, we propose a LiDAR-based joint approach for calculating the solar irradiance incident on roofs and façades of buildings at city scale, which includes a methodology for calculating solar irradiance, the validation of the proposed method, and analysis of its application. The calculation of surface irradiance on buildings may then inform photovoltaic power generation simulations, architectural design, and urban energy planning. Application analyses of the proposed method in the experiment area found that: (1) Global and direct irradiations vary significantly by hour, day, month and season, both following the same trends; however, diffuse irradiance essentially remains unchanged over time. (2) Roof irradiation, but not façade irradiation, displays distinct time-dependent patterns. (3) Global and direct irradiations on roofs are highly correlated with roof aspect and slope, with high global and direct irradiations observed on roofs of aspect 100-250° and slopes of 0-60°, whereas diffuse irradiation on roofs is only affected by roof slope. (4) The façade of a building receives higher levels of global and direct irradiations if facing southeast, south, and southwest; however, diffuse irradiation remains constant regardless of façade orientation.
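The slope and aspect dependence of roof and façade irradiation follows from the incidence-angle geometry of beam radiation on a tilted plane; the sketch below uses the standard relation cos(theta) = cos(beta)cos(theta_z) + sin(beta)sin(theta_z)cos(gamma_sun - gamma_surface) with illustrative sun positions, and it omits the shading and sky-view handling of the LiDAR-based pipeline.

```python
import math

def beam_irradiance_on_plane(dni, sun_zenith_deg, sun_azimuth_deg, slope_deg, aspect_deg):
    """Beam (direct) irradiance on a tilted surface from the incidence angle."""
    tz, gs = math.radians(sun_zenith_deg), math.radians(sun_azimuth_deg)
    b, g = math.radians(slope_deg), math.radians(aspect_deg)
    cos_inc = math.cos(b) * math.cos(tz) + math.sin(b) * math.sin(tz) * math.cos(gs - g)
    return dni * max(cos_inc, 0.0)             # self-shaded surfaces receive no beam

# Example: DNI 800 W/m^2, sun at 40 deg zenith, azimuth 200 deg (illustrative only).
for slope, aspect, label in [(30, 180, "south-facing roof, 30 deg"),
                             (30, 0,   "north-facing roof, 30 deg"),
                             (90, 180, "south facade")]:
    e = beam_irradiance_on_plane(800.0, 40.0, 200.0, slope, aspect)
    print(f"{label:<28s} {e:6.1f} W/m^2")
```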
Benavides, A L; Aragones, J L; Vega, C
2016-03-28
The solubility of NaCl in water is evaluated using three force field models: the Joung-Cheatham NaCl model dissolved in two different water models (SPC/E and TIP4P/2005) and the Smith-Dang NaCl model in SPC/E water. The methodology based on free-energy calculations [E. Sanz and C. Vega, J. Chem. Phys. 126, 014507 (2007)] and [J. L. Aragones et al., J. Chem. Phys. 136, 244508 (2012)] has been used, except that all calculations for NaCl in solution were performed using molecular dynamics simulations with the GROMACS package instead of homemade MC programs. We have explored new, lower molalities and made longer runs to improve the accuracy of the calculations. Exploring the low-molality region allowed us to obtain an analytical expression for the chemical potential of the ions in solution as a function of molality, valid for a wider range of molalities including the infinite-dilution case. These new results are in better agreement with recent estimations of the solubility obtained with other methodologies. In addition, two simple empirical rules have been obtained to provide a rough estimate of the solubility of a given model, by analyzing ion-pair formation as a function of molality and/or by calculating the difference between the NaCl solid chemical potential and the standard chemical potential of the salt in solution.
Characterization of a mine fire using atmospheric monitoring system sensor data.
Yuan, L; Thomas, R A; Zhou, L
2017-06-01
Atmospheric monitoring systems (AMS) have been widely used in underground coal mines in the United States for the detection of fire in the belt entry and the monitoring of other ventilation-related parameters such as airflow velocity and methane concentration in specific mine locations. In addition to an AMS being able to detect a mine fire, the AMS data have the potential to provide fire characteristic information such as fire growth - in terms of heat release rate - and exact fire location. Such information is critical in making decisions regarding fire-fighting strategies, underground personnel evacuation and optimal escape routes. In this study, a methodology was developed to calculate the fire heat release rate using AMS sensor data for carbon monoxide concentration, carbon dioxide concentration and airflow velocity based on the theory of heat and species transfer in ventilation airflow. Full-scale mine fire experiments were then conducted in the Pittsburgh Mining Research Division's Safety Research Coal Mine using an AMS with different fire sources. Sensor data collected from the experiments were used to calculate the heat release rates of the fires using this methodology. The calculated heat release rate was compared with the value determined from the mass loss rate of the combustible material using a digital load cell. The experimental results show that the heat release rate of a mine fire can be calculated using AMS sensor data with reasonable accuracy.
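The abstract does not give the exact formulation; one common way to turn AMS species and airflow readings into a heat release rate is carbon dioxide generation (CDG) calorimetry, sketched below with generic literature-style constants (roughly 13.3 kJ per gram of CO2 generated and 11.1 kJ per gram of CO) and invented sensor readings. The study's own calibration and species balance may differ.

```python
E_CO2 = 13.3e3   # J per g of CO2 generated (generic CDG constant, material dependent)
E_CO = 11.1e3    # J per g of CO generated (incomplete-combustion correction)

M_CO2, M_CO = 44.0, 28.0          # g/mol
MOLAR_VOLUME = 0.0224             # m^3/mol, approximate

def heat_release_rate(velocity_m_s, area_m2, x_co2, x_co2_ambient, x_co):
    """HRR (kW) from airway velocity, cross-section and measured mole fractions."""
    volumetric_flow = velocity_m_s * area_m2                  # m^3/s
    molar_flow = volumetric_flow / MOLAR_VOLUME               # mol/s of air
    m_co2 = molar_flow * (x_co2 - x_co2_ambient) * M_CO2      # g/s of CO2 generated
    m_co = molar_flow * x_co * M_CO                           # g/s of CO generated
    return (E_CO2 * m_co2 + E_CO * m_co) / 1000.0             # kW

# Illustrative AMS readings downstream of the fire (not experimental data).
hrr = heat_release_rate(1.5, 6.0, x_co2=0.0012, x_co2_ambient=0.0004, x_co=0.00005)
print(f"HRR ~ {hrr:.0f} kW")
```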
NASA Technical Reports Server (NTRS)
Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.
1997-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
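A minimal sketch of the time-step damage accumulation described above; the power-law rupture-time function and the relaxing stress history are hypothetical stand-ins for the modified Monkman-Grant criterion and the finite element stress solution.

```python
def rupture_time_hours(stress_mpa, A=5.0e12, n=4.5):
    """Hypothetical creep-rupture law t_f = A * sigma**(-n), in hours."""
    return A * stress_mpa ** (-n)

def accumulated_damage(stress_history_mpa, dt_hours):
    """Sum of time fractions dt / t_f(sigma) over constant-stress time steps."""
    return sum(dt_hours / rupture_time_hours(s) for s in stress_history_mpa)

# Stress relaxing from 120 MPa toward 80 MPa over 10,000 h in 100 h steps.
steps = 100
stresses = [80.0 + 40.0 * (0.97 ** i) for i in range(steps)]
D = accumulated_damage(stresses, dt_hours=100.0)
print(f"accumulated creep damage after 10,000 h: D = {D:.3f} (failure when D >= 1)")
```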
Proposal of a method for evaluating tsunami risk using response-surface methodology
NASA Astrophysics Data System (ADS)
Fukutani, Y.
2017-12-01
Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are inefficient and computationally expensive, since they require multiple tsunami numerical simulations, and they therefore lack versatility. In this study, we proposed a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using a response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the objective variable. Subsequently, tsunami risk could be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from the response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a*x1 + b*x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed appropriate probability distributions for earthquake generation, inundation height, and vulnerability and, based on these distributions, conducted a Monte Carlo simulation of 1,000,000 years. We found that the expected damage probability of the studied wood building is 22.5%, given that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response surface and Monte Carlo simulation without conducting multiple tsunami numerical simulations.
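A minimal sketch of the described Monte Carlo chain is shown below. It uses the reported response-surface coefficients, but the distributions of fault depth and slip, the annual earthquake rate, and the lognormal fragility parameters of the wood building are assumed for illustration, since they are not fully specified in the abstract.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N_YEARS = 1_000_000
EQ_RATE = 0.01           # assumed annual earthquake occurrence rate (Poisson)

# Response surface from the study: inundation depth y = a*x1 + b*x2 + c
a, b, c = 0.2615, 3.1763, -1.1802

def sample_inundation(n):
    """Sample inundation depths for n earthquakes (input distributions assumed)."""
    x1 = rng.uniform(5.0, 25.0, n)                    # fault depth (km), assumed
    x2 = rng.lognormal(mean=1.0, sigma=0.5, size=n)   # slip (m), assumed
    return np.maximum(a * x1 + b * x2 + c, 0.0)

def damage_probability(depth, median=2.0, beta=0.6):
    """Assumed lognormal fragility of the wood building."""
    p = np.zeros_like(depth)
    wet = depth > 0
    p[wet] = norm.cdf((np.log(depth[wet]) - np.log(median)) / beta)
    return p

n_eq = rng.poisson(EQ_RATE * N_YEARS)        # earthquakes over the simulated years
depths = sample_inundation(n_eq)
p_damage_given_eq = damage_probability(depths).mean()
print(f"{n_eq} simulated earthquakes, "
      f"expected damage probability given an earthquake: {p_damage_given_eq:.1%}")
```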
D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco
2016-02-01
Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose an FCA methodology that uses standard costs and actual quantities to calculate the collection costs of separated and undifferentiated waste. The methodology allows cost-efficiency analysis, benchmarking and variance analysis, overcoming problems related to firm-specific accounting choices, earnings management policies and purchasing policies; these analyses can be used to identify the causes of off-standard performance and guide managers to deploy resources more efficiently. The methodology can also be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Xantheas, Sotiris S.
We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90 and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm⁻¹ from those obtained from Cartesian coordinates.
Grigorev, Yu I; Lyapina, N V
2014-01-01
A hygienic analysis of the centralized drinking water supply in the Tula region was performed. Priority contaminants of the drinking water were established. Based on the application of risk assessment methodology, the carcinogenic risk to children's health was calculated. A direct relationship between certain classes of diseases and the pollution of drinking water with chemical contaminants was determined.
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
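To make the distinctions concrete, the short sketch below computes a few of the pmf variants mentioned above for an invented vehicle stage: a basic definition using total propellant and dry mass, one that counts only usable propellant (treating residuals and reserves as inert), and one that excludes installed engine mass from the dry mass. The masses are illustrative, and the exact definitions used by any particular organization may differ.

```python
# Illustrative propellant-mass-fraction (pmf) variants for a hypothetical stage (kg).
m_propellant_total = 250_000.0   # total loaded propellant
m_residuals        = 2_500.0     # unusable residuals
m_reserves         = 1_500.0     # flight performance reserves
m_dry              = 25_000.0    # stage dry mass
m_engines          = 8_000.0     # installed engine mass (part of dry mass)

def pmf(m_prop, m_inert):
    """Propellant mass fraction: propellant / (propellant + inert)."""
    return m_prop / (m_prop + m_inert)

# 1) Fundamental definition: all loaded propellant vs. dry mass.
pmf_basic = pmf(m_propellant_total, m_dry)

# 2) Usable-propellant definition: residuals and reserves are treated as inert.
m_usable = m_propellant_total - m_residuals - m_reserves
pmf_usable = pmf(m_usable, m_dry + m_residuals + m_reserves)

# 3) Definition excluding installed engine mass from the inert side.
pmf_no_engines = pmf(m_propellant_total, m_dry - m_engines)

print(f"basic: {pmf_basic:.4f}, usable: {pmf_usable:.4f}, "
      f"excluding engines: {pmf_no_engines:.4f}")
```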
A Methodology for Loading the Advanced Test Reactor Driver Core for Experiment Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cowherd, Wilson M.; Nielsen, Joseph W.; Choe, Dong O.
In support of experiments in the ATR, a new methodology was devised for loading the ATR Driver Core. This methodology will replace the existing methodology used by the INL Neutronic Analysis group to analyze experiments. This paper studies the as-run analysis for ATR Cycle 152B, specifically comparing measured lobe powers with eigenvalue calculations.
Identifying the starting point of a spreading process in complex networks.
Comin, Cesar Henrique; Costa, Luciano da Fontoura
2011-11-01
When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest centrality values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
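A minimal illustration of the identification step is sketched below with NetworkX: given the subgraph reached by a simulated spreading process, the node that ranks highest under each centrality measure is taken as a candidate source. The graph and spreading model here are simple stand-ins (a BFS-limited spread on a Barabási-Albert graph), not the schemes or networks studied in the paper.

```python
import networkx as nx

# Toy network and a simple spread: the "sampled network" is the set of nodes
# reached within a few hops of the true source (a stand-in for the epidemic data).
G = nx.barabasi_albert_graph(300, 3, seed=1)
true_source = 42
reached = nx.single_source_shortest_path_length(G, true_source, cutoff=3)
sampled = G.subgraph(reached).copy()

# Rank nodes of the sampled network by several centrality measures.
centralities = {
    "degree": nx.degree_centrality(sampled),
    "betweenness": nx.betweenness_centrality(sampled),
    "closeness": nx.closeness_centrality(sampled),
    "eigenvector": nx.eigenvector_centrality(sampled, max_iter=1000),
}
for name, values in centralities.items():
    candidate = max(values, key=values.get)
    print(f"{name:>11s}: top-ranked node = {candidate} (true source = {true_source})")
```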
Juncture flow improvement for wing/pylon configurations by using CFD methodology
NASA Technical Reports Server (NTRS)
Gea, Lie-Mine; Chyu, Wei J.; Stortz, Michael W.; Chow, Chuen-Yen
1993-01-01
The transonic flow field around a fighter wing/pylon configuration was simulated using an implicit upwind Navier-Stokes flow solver (F3D) and overset grid technology (Chimera). Flow separation and local shocks near the wing/pylon junction were observed in flight and predicted by the numerical calculations. A new pylon/fairing shape was proposed to improve the flow quality. Based on the numerical results, the size of the separation area is significantly reduced and the onset of separation is delayed farther downstream. A smoother pressure gradient is also obtained near the junction area. This paper demonstrates that computational fluid dynamics (CFD) methodology can be used as a practical tool for aircraft design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spinella, Corrado; Bongiorno, Corrado; Nicotra, Giuseppe
2005-07-25
We present an analytical methodology, based on electron energy loss spectroscopy (EELS) and energy-filtered transmission electron microscopy, which allows us to quantify the clustered silicon concentration in annealed substoichiometric silicon oxide layers, deposited by plasma-enhanced chemical vapor deposition. The clustered Si volume fraction was deduced from a fit to the experimental EELS spectrum using a theoretical description proposed to calculate the dielectric function of a system of spherical particles of equal radii, located at random in a host material. The methodology allowed us to demonstrate that the clustered Si concentration is only one half of the excess Si concentration dissolved in the layer.
C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component
NASA Astrophysics Data System (ADS)
Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.
2018-06-01
The failure of tractor components and their replacement have now become very common in India because of re-cycling, re-sale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change, co-simulated with software. In the proposed design methodology, the designer checks P_axial, P_cr, P_failure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are formed. We explain several techniques employed to lighten the component by removing rib material to change the center of gravity and centroid point, using SystemC for mixed-level simulation and faster topological changes. The design process in SystemC can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.
User Evaluation of the NASA Technical Report Server Recommendation Service
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Bollen, Johan; Calhoun, JoAnne R.; Mackey, Calvin E.
2004-01-01
We present the user evaluation of two recommendation server methodologies implemented for the NASA Technical Report Server (NTRS). One methodology for generating recommendations uses log analysis to identify co-retrieval events on full-text documents. For comparison, we used the Vector Space Model (VSM) as the second methodology. We calculated cosine similarities and used the top 10 most similar documents (based on metadata) as "recommendations". We then ran an experiment with NASA Langley Research Center (LaRC) staff members to gather their feedback on which method produced the most "quality" recommendations. We found that in most cases VSM outperformed log analysis of co-retrievals. However, analyzing the data revealed the evaluations may have been structurally biased in favor of the VSM generated recommendations. We explore some possible methods for combining log analysis and VSM generated recommendations and suggest areas of future work.
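For readers unfamiliar with the VSM side of the comparison, the sketch below shows the usual recipe: build TF-IDF vectors from document metadata, compute cosine similarities, and take the most similar documents as recommendations. The toy metadata strings are invented; the actual NTRS implementation details are not described in this abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented metadata records (title/abstract text) standing in for NTRS documents.
docs = [
    "transonic wing pylon juncture flow separation CFD",
    "vortex lattice method for low speed high lift aerodynamics",
    "pressure gain combustion turbine work extraction",
    "flap lift effectiveness and drag increment empirical methods",
    "overset grid Navier-Stokes simulation of fighter aircraft",
]

tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform(docs)          # documents as TF-IDF vectors
similarity = cosine_similarity(vectors)      # pairwise cosine similarities

query = 0                                    # recommend documents similar to docs[0]
ranked = similarity[query].argsort()[::-1]   # most similar first (includes itself)
top = [i for i in ranked if i != query][:10] # top-10 recommendations
print("Recommendations for document 0:", top)
```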
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, an input-output pairing analysis is performed to select the most suitable input to control each output, aiming to attenuate loop coupling. The plant uncertainty limits are then selected and expressed in interval form over the parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from the experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
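The abstract does not specify which pairing analysis was used; a common tool for this step is the relative gain array (RGA), sketched below for an invented 2x2 DC gain matrix of a SIDO converter. Diagonal elements close to 1 suggest that the diagonal pairing gives weak loop interaction. The gain values are placeholders, not those of the converter studied in the paper.

```python
import numpy as np

def relative_gain_array(G):
    """RGA of a square gain matrix: element-wise product of G and inv(G) transposed."""
    return G * np.linalg.inv(G).T

# Invented DC gain matrix of a 2-input / 2-output (SIDO) converter model:
# rows = outputs (v_out1, v_out2), columns = inputs (duty cycles d1, d2).
G = np.array([[4.0, 0.8],
              [0.6, 3.5]])

rga = relative_gain_array(G)
print(np.round(rga, 3))
# Diagonal elements near 1 suggest pairing d1 -> v_out1 and d2 -> v_out2,
# i.e. weak coupling between the two control loops.
```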
Semi-Empirical Prediction of Aircraft Low-Speed Aerodynamic Characteristics
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
This paper lays out a comprehensive methodology for computing a low-speed, high-lift polar, without requiring additional details about the aircraft design beyond what is typically available at the conceptual design stage. Introducing low-order, physics-based aerodynamic analyses allows the methodology to be more applicable to unconventional aircraft concepts than traditional, fully-empirical methods. The methodology uses empirical relationships for flap lift effectiveness, chord extension, drag-coefficient increment and maximum lift coefficient of various types of flap systems as a function of flap deflection, and combines these increments with the characteristics of the unflapped airfoils. Once the aerodynamic characteristics of the flapped sections are known, a vortex-lattice analysis calculates the three-dimensional lift, drag and moment coefficients of the whole aircraft configuration. This paper details the results of two validation cases: a supercritical airfoil model with several types of flaps; and a 12-foot, full-span aircraft model with slats and double-slotted flaps.
NASA Astrophysics Data System (ADS)
Pathak, Ashish; Raessi, Mehdi
2016-04-01
We present a three-dimensional (3D) and fully Eulerian approach to capturing the interaction between two fluids and moving rigid structures by using the fictitious domain and volume-of-fluid (VOF) methods. The solid bodies can have arbitrarily complex geometry and can pierce the fluid-fluid interface, forming contact lines. The three-phase interfaces are resolved and reconstructed by using a VOF-based methodology. Then, a consistent scheme is employed for transporting mass and momentum, allowing for simulations of three-phase flows of large density ratios. The Eulerian approach significantly simplifies numerical resolution of the kinematics of rigid bodies of complex geometry and with six degrees of freedom. The fluid-structure interaction (FSI) is computed using the fictitious domain method. The methodology was developed in a message passing interface (MPI) parallel framework accelerated with graphics processing units (GPUs). The computationally intensive solution of the pressure Poisson equation is ported to GPUs, while the remaining calculations are performed on CPUs. The performance and accuracy of the methodology are assessed using an array of test cases, focusing individually on the flow solver and the FSI in surface-piercing configurations. Finally, an application of the proposed methodology in simulations of the ocean wave energy converters is presented.
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Kaemming, Thomas A.
2012-01-01
A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.
Shah, Tayyab Ikram; Milosavljevic, Stephan; Bath, Brenna
2017-06-01
This research addresses methodological challenges and considerations associated with estimating the geographical aspects of access to healthcare, with a focus on rural and remote areas. We assume that GIS-based accessibility measures for rural healthcare services will vary across geographic units of analysis and estimation techniques, which could influence the interpretation of spatial access to rural healthcare services. Estimations of geographical accessibility depend on variations in the following three parameters: 1) quality of input data; 2) accessibility method; and 3) geographical area. This research investigated the spatial distributions of physiotherapists (PTs) in comparison to family physicians (FPs) across Saskatchewan, Canada. The three-step floating catchment area (3SFCA) method was applied to calculate accessibility scores for both PT and FP services at two different geographical units. Accessibility scores were also compared to simple healthcare provider-to-population ratios. The results vary considerably depending on the accessibility method used and the choice of geographical area unit for measuring geographical accessibility for both FP and PT services. These findings raise intriguing questions regarding the nature and extent of technical issues and methodological considerations that can affect GIS-based measures in health services research and planning. This study demonstrates how the selection of geographical areal units and different methods for measuring geographical accessibility could affect the distribution of healthcare resources in rural areas. These methodological issues have implications for determining where there is reduced access, which will ultimately impact health human resource priorities and policies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rubin, Katrine Hass; Friis-Holmberg, Teresa; Hermann, Anne Pernille; Abrahamsen, Bo; Brixen, Kim
2013-08-01
A huge number of risk assessment tools have been developed. Far from all have been validated in external studies, many lack methodological and transparent evidence, and few are integrated into national guidelines. Therefore, we performed a systematic review to provide an overview of existing valid and reliable risk assessment tools for the prediction of osteoporotic fractures. Additionally, we aimed to determine whether the performance of each tool was sufficient for practical use and, finally, to examine whether the complexity of the tools influenced their discriminative power. We searched the PubMed, Embase, and Cochrane databases for papers and evaluated these with respect to methodological quality using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) checklist. A total of 48 tools were identified; 20 had been externally validated; however, only six tools had been tested more than once in a population-based setting with acceptable methodological quality. None of the tools performed consistently better than the others, and simple tools (i.e., the Osteoporosis Self-assessment Tool [OST], Osteoporosis Risk Assessment Instrument [ORAI], and Garvan Fracture Risk Calculator [Garvan]) often did as well as or better than more complex tools (i.e., Simple Calculated Osteoporosis Risk Estimation [SCORE], WHO Fracture Risk Assessment Tool [FRAX], and QFracture). No studies determined the effectiveness of tools in selecting patients for therapy and thus improving fracture outcomes. High-quality studies in a randomized design with population-based cohorts with different case mixes are needed. Copyright © 2013 American Society for Bone and Mineral Research.
Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea
2014-05-02
Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0-13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments.
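As a concrete illustration of the weight-band costing step, the sketch below multiplies per-dose amounts from an assumed weight-band dosing table by an assumed unit drug price to obtain an average monthly regimen cost. The dosing table, prices, and dosing frequency are placeholders, not WHO recommendations or the values derived in the study.

```python
# Illustrative monthly ARV cost from weight-band dosing (all values hypothetical).
DOSING_TABLE_MG = {       # (min_kg, max_kg): single dose in mg
    (3.0, 5.9): 30,
    (6.0, 9.9): 50,
    (10.0, 13.9): 80,
    (14.0, 19.9): 100,
}
DOSES_PER_DAY = 2
PRICE_PER_MG = 0.002      # assumed unit cost, USD per mg
DAYS_PER_MONTH = 30.4

def monthly_cost(weight_kg):
    """Average monthly drug cost (USD) for a child of the given weight."""
    for (lo, hi), dose_mg in DOSING_TABLE_MG.items():
        if lo <= weight_kg <= hi:
            return dose_mg * DOSES_PER_DAY * DAYS_PER_MONTH * PRICE_PER_MG
    raise ValueError("weight outside the illustrative dosing table")

for w in (4.5, 8.0, 12.0, 16.0):
    print(f"{w:5.1f} kg -> ${monthly_cost(w):.2f} per month")
```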
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated aforementioned frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases.
Highlights:
• Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders
• Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform
• Spectral analyses showed deficiencies of existing methods in determining the FIR filter order
• A new method of determining the FIR filter order for processing pressure traces was proposed
• The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
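A minimal example of designing and applying an equiripple FIR low-pass filter to a pressure trace with SciPy is given below. The sampling rate, pass-band and stop-band frequencies, and filter order are placeholders chosen by hand; in the proposed workflow they would instead come from the DFT/STFT analysis and the order-selection criterion described above.

```python
import numpy as np
from scipy.signal import remez, filtfilt

FS = 90_000.0        # assumed sampling rate of the pressure trace (Hz)
F_PASS, F_STOP = 3_000.0, 5_000.0   # placeholder transition-band edges (Hz)
NUMTAPS = 101        # placeholder filter length (order + 1)

# Equiripple (Parks-McClellan) low-pass FIR filter.
taps = remez(NUMTAPS, [0, F_PASS, F_STOP, 0.5 * FS], [1, 0], fs=FS)

# Synthetic in-cylinder pressure trace: smooth component plus high-frequency noise.
t = np.arange(0, 0.05, 1.0 / FS)
pressure = 40 + 30 * np.exp(-((t - 0.02) / 0.004) ** 2) + 0.5 * np.random.randn(t.size)

# Zero-phase filtering so the filtered trace is not delayed relative to crank angle.
filtered = filtfilt(taps, [1.0], pressure)
print(f"raw std: {pressure.std():.2f} bar, filtered std: {filtered.std():.2f} bar")
```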
Comparing the Energy Content of Batteries, Fuels, and Materials
ERIC Educational Resources Information Center
Balsara, Nitash P.; Newman, John
2013-01-01
A methodology for calculating the theoretical and practical specific energies of rechargeable batteries, fuels, and materials is presented. The methodology enables comparison of the energy content of diverse systems such as the lithium-ion battery, hydrocarbons, and ammonia. The methodology is relevant for evaluating the possibility of using…
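The core of such a comparison is a simple unit calculation, sketched below: the theoretical specific energy of an electrochemical couple follows from its cell voltage, the electrons transferred, and the mass of reactants, while for a fuel it is essentially its heating value. The cell parameters and heating values below are rough illustrative numbers, not those tabulated in the article.

```python
F = 96485.0   # Faraday constant, C/mol

def battery_specific_energy_wh_per_kg(n_electrons, cell_voltage_v, reactant_mass_g_mol):
    """Theoretical specific energy from n, cell voltage, and reactant molar mass."""
    energy_j_per_mol = n_electrons * F * cell_voltage_v
    return energy_j_per_mol / 3.6 / reactant_mass_g_mol   # Wh/kg

# Rough example: a Li-ion couple with ~3.8 V average voltage, 1 electron per
# formula unit, and ~170 g/mol of combined electrode mass per electron (illustrative).
li_ion = battery_specific_energy_wh_per_kg(1, 3.8, 170.0)

# Fuels: theoretical specific energy is essentially the lower heating value (LHV).
fuels_wh_per_kg = {
    "gasoline (LHV ~43 MJ/kg)": 43e6 / 3600,
    "ammonia (LHV ~18.6 MJ/kg)": 18.6e6 / 3600,
}

print(f"illustrative Li-ion theoretical: {li_ion:.0f} Wh/kg")
for name, e in fuels_wh_per_kg.items():
    print(f"{name}: {e:.0f} Wh/kg")
```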
Seismic low-frequency-based calculation of reservoir fluid mobility and its applications
NASA Astrophysics Data System (ADS)
Chen, Xue-Hua; He, Zhen-Hua; Zhu, Si-Xin; Liu, Wei; Zhong, Wen-Li
2012-06-01
Low frequency content of seismic signals contains information related to the reservoir fluid mobility. Based on the asymptotic analysis theory of frequency-dependent reflectivity from a fluid-saturated poroelastic medium, we derive the computational implementation of reservoir fluid mobility and present the determination of optimal frequency in the implementation. We then calculate the reservoir fluid mobility using the optimal frequency instantaneous spectra at the low-frequency end of the seismic spectrum. The methodology is applied to synthetic seismic data from a permeable gas-bearing reservoir model and real land and marine seismic data. The results demonstrate that the fluid mobility shows excellent quality in imaging the gas reservoirs. It is feasible to detect the location and spatial distribution of gas reservoirs and reduce the non-uniqueness and uncertainty in fluid identification.
Dyekjaer, Jane Dannow; Jónsdóttir, Svava Osk
2004-01-22
Quantitative Structure-Property Relationships (QSPR) have been developed for a series of monosaccharides, including the physical properties of partial molar heat capacity, heat of solution, melting point, heat of fusion, glass-transition temperature, and solid state density. The models were based on molecular descriptors obtained from molecular mechanics and quantum chemical calculations, combined with other types of descriptors. Saccharides exhibit a large degree of conformational flexibility, therefore a methodology for selecting the energetically most favorable conformers has been developed, and was used for the development of the QSPR models. In most cases good correlations were obtained for monosaccharides. For five of the properties predictions were made for disaccharides, and the predicted values for the partial molar heat capacities were in excellent agreement with experimental values.
Hagiwara, Yohsuke; Tateno, Masaru
2010-10-20
We review the recent research on the functional mechanisms of biological macromolecules using theoretical methodologies coupled to ab initio quantum mechanical (QM) treatments of reaction centers in proteins and nucleic acids. Since in most cases such biological molecules are large, the computational costs of performing ab initio calculations for the entire structures are prohibitive. Instead, simulations combined with molecular mechanics (MM) calculations are crucial to evaluate the long-range electrostatic interactions, which significantly affect the electronic structures of biological macromolecules. Thus, we focus our attention on the methodologies/schemes and applications of combined QM/MM calculations, and discuss the critical issues to be elucidated in biological macromolecular systems. © 2010 IOP Publishing Ltd
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Nityananda; Gadre, Shridhar R., E-mail: gadre@iitk.ac.in
The present work reports the calculation of vibrational infrared (IR) and Raman spectra of large molecular systems employing molecular tailoring approach (MTA). Further, it extends the grafting procedure for the accurate evaluation of IR and Raman spectra of large molecular systems, employing a new methodology termed as Fragments-in-Fragments (FIF), within MTA. Unlike the previous MTA-based studies, the accurate estimation of the requisite molecular properties is achieved without performing any full calculations (FC). The basic idea of the grafting procedure is implemented by invoking the nearly basis-set-independent nature of the MTA-based error vis-à-vis the respective FCs. FIF has been tested out for the estimation of the above molecular properties for three isomers, viz., β-strand, 3₁₀- and α-helix of acetyl(alanine)ₙNH₂ (n = 10, 15) polypeptides, three conformers of doubly protonated gramicidin S decapeptide and trpzip2 protein (PDB id: 1LE1), respectively, employing BP86/TZVP, M06/6-311G**, and M05-2X/6-31G** levels of theory. For most of the cases, a maximum difference of 3 cm⁻¹ is achieved between the grafted-MTA frequencies and the corresponding FC values. Further, a comparison of the BP86/TZVP level IR and Raman spectra of α-helical (alanine)₂₀ and its N-deuterated derivative shows an excellent agreement with the existing experimental spectra. In view of the requirement of only MTA-based calculations and the ability of FIF to work at any level of theory, the current methodology provides a cost-effective solution for obtaining accurate spectra of large molecular systems.
NASA Astrophysics Data System (ADS)
Schwietzke, S.; Petron, G.; Conley, S. A.; Karion, A.; Tans, P. P.; Wolter, S.; King, C. W.; White, A. B.; Coleman, T.; Bianco, L.; Schnell, R. C.
2016-12-01
Confidence in basin scale oil and gas industry related methane (CH4) emission estimates hinges on an in-depth understanding, objective evaluation, and continued improvements of both top-down (e.g. aircraft measurement based) and bottom-up (e.g. emission inventories using facility- and/or component-level measurements) approaches. Systematic discrepancies of CH4 emission estimates between both approaches in the literature have highlighted research gaps. This paper is part of a more comprehensive study to expand and improve this reconciliation effort for a US dry shale gas play. This presentation will focus on refinements of the aircraft mass balance method to reduce the number of potential methodological biases (e.g. data and methodology). The refinements include (i) an in-depth exploration of the definition of upwind conditions and their impact on calculated downwind CH4 enhancements and total CH4 emissions, (ii) taking into account small but non-zero vertical and horizontal wind gradients in the boundary layer, and (iii) characterizing the spatial distribution of CH4 emissions in the study area using aircraft measurements. For the first time to our knowledge, we apply the aircraft mass balance method to calculate spatially resolved total CH4 emissions for 10 km x 60 km sub-regions within the study area. We identify higher-emitting sub-regions and localize repeating emission patterns as well as differences between days. The increased resolution of the top-down calculation will for the first time allow for an in-depth comparison with a spatially and temporally resolved bottom-up emission estimate based on measurements, concurrent activity data and other data sources.
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
The development of a digital indoor archery simulator based on embedded systems is a solution to the limited availability of adequate fields or open spaces, especially in big cities. Development of the device requires a simulation that calculates the target score based on a parabolic-motion approach defined by the initial velocity and direction of motion of the arrow as it reaches the target. The simulator is complemented with an initial-velocity measuring device using ultrasonic sensors and a target-direction measurement using a digital camera. The methodology follows a research-and-development approach for application software based on modeling and simulation. The research objective is to create a simulation application that calculates the score achieved by the arrows. This serves as a preliminary stage for the development of the archery simulator device. Implementing the score calculation in the application program produces an archery simulation game that can serve as a reference for developing an indoor digital archery simulator with embedded systems using ultrasonic sensors and web cameras. The developed application compares the calculated values against the outer radius of the target circle as captured by a camera at a distance of three meters.
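A small sketch of the scoring calculation is given below: from the measured launch speed (ultrasonic sensor) and launch angles (camera), the arrow is propagated as a projectile to the target plane, and the score is assigned from the radial distance to the target centre. Target distance, ring width, launch height, and the example launch values are illustrative assumptions, not parameters from the paper.

```python
import math

G = 9.81                 # gravity (m/s^2)
TARGET_DISTANCE = 18.0   # assumed indoor target distance (m)
RING_WIDTH = 0.061       # assumed ring width (m), 10 concentric scoring rings

def arrow_score(speed, elevation_deg, azimuth_deg, launch_height=1.5, target_height=1.3):
    """Score (0-10) from projectile motion to the vertical target plane."""
    el, az = math.radians(elevation_deg), math.radians(azimuth_deg)
    vx = speed * math.cos(el) * math.cos(az)   # toward the target
    vy = speed * math.cos(el) * math.sin(az)   # sideways
    vz = speed * math.sin(el)                  # upward
    t = TARGET_DISTANCE / vx                   # time to reach the target plane
    y = vy * t                                 # horizontal offset at the target
    z = launch_height + vz * t - 0.5 * G * t**2
    r = math.hypot(y, z - target_height)       # radial miss distance from centre
    return max(0, 10 - int(r // RING_WIDTH))

# Example: 50 m/s launch, slight upward and sideways aim.
print("score:", arrow_score(speed=50.0, elevation_deg=0.6, azimuth_deg=0.2))
```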
A new approach to assessing the water footprint of wine: an Italian case study.
Lamastra, Lucrezia; Suciu, Nicoleta Alina; Novelli, Elisa; Trevisan, Marco
2014-08-15
Agriculture is the largest freshwater consumer, accounting for 70% of the world's water withdrawal. Water footprints (WFs) are being increasingly used to indicate the impacts of water use by production systems. A new methodology to assess the WF of wine was developed in the framework of the V.I.V.A. project (Valutazione Impatto Viticoltura sull'Ambiente), launched by the Italian Ministry for the Environment in 2011 to improve the Italian wine sector's sustainability. The new methodology enables different vineyards from the same winery to be compared. This was achieved by calculating the gray water footprint following the Tier III approach proposed by Hoekstra et al. (2011). The impact of water use during the life cycle of grape-wine production was assessed for six different wines from the same winery in Sicily, Italy, using both the newly developed methodology (V.I.V.A.) and the classical methodology proposed by the Water Footprint Network (WFN). In all cases green water was the largest contributor to the WF, but the new methodology also detected differences between vineyards of the same winery. Furthermore, the V.I.V.A. methodology assesses water body contamination by pesticide application, whereas the WFN methodology considers only fertilization. This explains the higher WF of vineyard 4 calculated by V.I.V.A. compared with the WF calculated with the WFN methodology. Comparing the WFs of the six different wines, the factors most greatly influencing the results obtained in this study were: distance from the water body, fertilization rate, and the amount and eco-toxicological behavior of the active ingredients used. Copyright © 2014 Elsevier B.V. All rights reserved.
Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
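The core point, that a single accuracy value depends on the underlying sample distribution, can be reproduced with a few lines of simulation: the numerical relationship between the two hypothetical assays is kept fixed while the cholesterol distribution is changed, and the categorical accuracy at a fixed cut-off shifts accordingly. The distribution parameters, noise level, and cut-off below are illustrative, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
CUTOFF = 5.0   # illustrative diagnostic threshold (mmol/L)

def accuracy(chol_gold, noise_sd=0.4):
    """Accuracy of a 'rapid' assay = gold value + fixed measurement noise."""
    chol_rapid = chol_gold + rng.normal(0.0, noise_sd, chol_gold.size)
    agree = (chol_rapid >= CUTOFF) == (chol_gold >= CUTOFF)
    return agree.mean()

# Same assay relationship, two different sample distributions of 'true' cholesterol.
centred_on_cutoff = rng.normal(5.0, 0.5, 100_000)   # many values near the threshold
far_from_cutoff = rng.normal(6.5, 0.5, 100_000)     # few values near the threshold

print(f"accuracy, sample centred on cut-off: {accuracy(centred_on_cutoff):.3f}")
print(f"accuracy, sample far from cut-off:  {accuracy(far_from_cutoff):.3f}")
```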
Adaptive real-time methodology for optimizing energy-efficient computing
Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA
2011-06-28
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
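A simplified sketch in the spirit of such run-time control is shown below: execution time is modeled as a frequency-dependent (CPU-bound) part plus a frequency-independent part, the split is estimated from timings at two frequencies, and the lowest frequency satisfying a user-set slowdown constraint is chosen. This is an illustration of the general idea only, with invented frequency steps and timings; it is not the specific algorithm covered by the patent.

```python
# Choose the lowest CPU frequency that keeps slowdown within a tolerance,
# using a simple T(f) = work/f + t_mem model fitted from two measurements.

FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed available frequency steps
DELTA = 0.05                            # allowed relative slowdown (5%)

def fit_model(f1, t1, f2, t2):
    """Solve T(f) = work/f + t_mem from timings at two frequencies."""
    work = (t1 - t2) / (1.0 / f1 - 1.0 / f2)
    t_mem = t1 - work / f1
    return work, t_mem

def pick_frequency(work, t_mem):
    f_max = max(FREQS_GHZ)
    t_at_max = work / f_max + t_mem
    for f in sorted(FREQS_GHZ):                 # lowest frequency first
        if work / f + t_mem <= (1.0 + DELTA) * t_at_max:
            return f
    return f_max

# Example: a mostly memory-bound interval measured at 10.0 s (2.8 GHz) and 10.4 s (2.0 GHz).
work, t_mem = fit_model(2.8, 10.0, 2.0, 10.4)
print(f"CPU-bound work: {work:.1f} GHz*s, memory-bound time: {t_mem:.1f} s")
print(f"selected frequency: {pick_frequency(work, t_mem)} GHz")
```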
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Performance analysis of an air drier for a liquid dehumidifier solar air conditioning system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Queiroz, A.G.; Orlando, A.F.; Saboya, F.E.M.
1988-05-01
A model was developed for calculating the operating conditions of a non-adiabatic liquid dehumidifier used in solar air conditioning systems. In the experimental facility used for obtaining the data, air and triethylene glycol circulate countercurrently outside staggered copper tubes which are the filling of an absorption tower. Water flows inside the copper tubes, thus cooling the whole system and increasing the mass transfer potential for drying air. The methodology for calculating the mass transfer coefficient is based on the Merkel integral approach, taking into account the lowering of the water vapor pressure in equilibrium with the water glycol solution.
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
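As a small illustration of the feature-extraction step, the sketch below computes the energy of each intrinsic mode function and normalizes it into a feature vector. The decomposition itself would come from an EEMD implementation (for example the PyEMD package) applied to the measured vibration signal; here it is stubbed out with synthetic IMFs so the example stays self-contained, and the signal is not from any real gear pump.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the EEMD step: in practice the IMFs would come from an EEMD
# implementation (e.g. PyEMD's EEMD class) applied to the vibration signal.
t = np.linspace(0.0, 1.0, 2048)
imfs = np.vstack([
    np.sin(2 * np.pi * 350 * t) * np.exp(-3 * t),   # high-frequency, decaying IMF
    0.6 * np.sin(2 * np.pi * 120 * t),              # mid-frequency IMF
    0.3 * np.sin(2 * np.pi * 25 * t),               # low-frequency IMF
    0.05 * rng.standard_normal(t.size),             # residual-like component
])

def imf_energy_features(imfs):
    """Energy of each IMF, normalized to sum to one (fault features)."""
    energies = np.sum(imfs**2, axis=1)
    return energies / energies.sum()

features = imf_energy_features(imfs)
print("normalized IMF energy features:", np.round(features, 3))
# These features would populate the fault-feature layer of the Bayesian network,
# alongside other information sources such as inspection and maintenance records.
```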
Borotikar, Bhushan S.; Sheehan, Frances T.
2017-01-01
Objectives: To establish an in vivo, normative patellofemoral cartilage contact mechanics database acquired during voluntary muscle control using a novel dynamic magnetic resonance (MR) imaging-based computational methodology and validate the contact mechanics sensitivity to the known sub-millimeter methodological inaccuracies. Design: Dynamic cine phase-contrast and multi-plane cine images were acquired while female subjects (n=20, sample of convenience) performed an open kinetic chain (knee flexion-extension) exercise inside a 3-Tesla MR scanner. Static cartilage models were created from high resolution three-dimensional static MR data and accurately placed in their dynamic pose at each time frame based on the cine-PC data. Cartilage contact parameters were calculated based on the surface overlap. Statistical analysis was performed using paired t-test and a one-sample repeated measures ANOVA. The sensitivity of the contact parameters to the known errors in the patellofemoral kinematics was determined. Results: Peak mean patellofemoral contact area was 228.7±173.6 mm² at 40° knee angle. During extension, contact centroid and peak strain locations tracked medially on the femoral and patellar cartilage and were not significantly different from each other. At 30°, 35°, and 40° of knee extension, contact area was significantly different. Contact area and centroid locations were insensitive to rotational and translational perturbations. Conclusion: This study is a first step towards unfolding the biomechanical pathways to anterior patellofemoral pain and OA using dynamic, in vivo, and accurate methodologies. The database provides crucial data for future studies and for validation of, or as an input to, computational models. PMID:24012620
Roberts, L.N.; Biewick, L.R.
1999-01-01
This report documents a comparison of two methods of resource calculation that are being used in the National Coal Resource Assessment project of the U.S. Geological Survey (USGS). Tewalt (1998) discusses the history of using computer software packages such as GARNET (Graphic Analysis of Resources using Numerical Evaluation Techniques), GRASS (Geographic Resource Analysis Support System), and the vector-based geographic information system (GIS) ARC/INFO (ESRI, 1998) to calculate coal resources within the USGS. The study discussed here compares resource calculations using ARC/INFO (ESRI, 1998) and EarthVision (EV) (Dynamic Graphics, Inc., 1997) for the coal-bearing John Henry Member of the Straight Cliffs Formation of Late Cretaceous age in the Kaiparowits Plateau of southern Utah. Coal resource estimates in the Kaiparowits Plateau using ARC/INFO are reported in Hettinger and others (1996).
An Evaluation of Aircraft Emissions Inventory Methodology by Comparisons with Reported Airline Data
NASA Technical Reports Server (NTRS)
Daggett, D. L.; Sutkus, D. J.; DuBois, D. P.; Baughcum, S. L.
1999-01-01
This report provides results of work done to evaluate the calculation methodology used in generating aircraft emissions inventories. Results from the inventory calculation methodology are compared to actual fuel consumption data. Results are also presented that show the sensitivity of calculated emissions to aircraft payload factors. Comparisons of departures made, ground track miles flown and total fuel consumed by selected air carriers were made between U.S. Dept. of Transportation (DOT) Form 41 data reported for 1992 and results of simplified aircraft emissions inventory calculations. These comparisons provide an indication of the magnitude of error that may be present in aircraft emissions inventories. To determine some of the factors responsible for the errors quantified in the DOT Form 41 analysis, a comparative study of in-flight fuel flow data for a specific operator's 747-400 fleet was conducted. Fuel consumption differences between the studied aircraft and the inventory calculation results may be attributable to several factors. Among these are longer flight times, greater actual aircraft weight and performance deterioration effects for the in-service aircraft. Results of a parametric study on the variation in fuel use and NOx emissions as a function of aircraft payload for different aircraft types are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathan, S.; Loftin, B.; Abramczyk, G.
The Small Gram Quantity (SGQ) concept is based on the understanding that small amounts of hazardous materials, in this case radioactive materials (RAM), are significantly less hazardous than large amounts of the same materials. This paper describes a methodology designed to estimate an SGQ for several neutron and gamma emitting isotopes that can be shipped in a package compliant with 10 CFR Part 71 external radiation level limits regulations. These regulations require packaging for the shipment of radioactive materials, under both normal and accident conditions, to perform the essential functions of material containment, subcriticality, and maintain external radiation levels within the specified limits. By placing the contents in a helium leak-tight containment vessel, and limiting the mass to ensure subcriticality, the first two essential functions are readily met. Some isotopes emit sufficiently strong photon radiation that small amounts of material can yield a large dose rate outside the package. Quantifying the dose rate for a proposed content is a challenging issue for the SGQ approach. It is essential to quantify external radiation levels from several common gamma and neutron sources that can be safely placed in a specific packaging, to ensure compliance with federal regulations. The Packaging Certification Program (PCP) Methodology for Determining Dose Rate for Small Gram Quantities in Shipping Packagings provides bounding shielding calculations that define mass limits compliant with 10 CFR 71.47 for a set of proposed SGQ isotopes. The approach is based on energy superposition with dose response calculated for a set of spectral groups for a baseline physical packaging configuration. The methodology includes using the MCNP radiation transport code to evaluate a family of neutron and photon spectral groups using the 9977 shipping package and its associated shielded containers as the base case. This results in a set of multipliers for 'dose per particle' for each spectral group. For a given isotope, the source spectrum is folded with the response for each group. The summed contribution from all isotopes determines the total dose from the RAM in the container.
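The folding step described above amounts to a weighted sum: the per-group dose-rate responses (dose per emitted particle, precomputed for the packaging geometry) are multiplied by the source strength in each spectral group and summed over groups and isotopes. The sketch below shows that arithmetic with an invented group structure, responses, and source spectra; the actual values come from the PCP methodology and MCNP calculations, not from this example.

```python
import numpy as np

# Invented 4-group example: dose-rate response per emitted particle in each
# spectral group (e.g. mrem/h per particle/s at the package surface).
response_per_group = np.array([2.0e-9, 8.0e-10, 3.0e-10, 1.0e-10])

# Invented isotope source spectra: particles/s emitted per gram, by group.
source_spectra = {
    "isotope_A": np.array([1.0e7, 5.0e6, 1.0e6, 0.0]),
    "isotope_B": np.array([2.0e6, 2.0e6, 2.0e6, 2.0e6]),
}
masses_g = {"isotope_A": 0.5, "isotope_B": 2.0}

DOSE_LIMIT = 200.0   # 10 CFR 71.47 surface dose-rate limit, mrem/h

total_dose = sum(
    masses_g[iso] * float(np.dot(spectrum, response_per_group))
    for iso, spectrum in source_spectra.items()
)
print(f"estimated surface dose rate: {total_dose:.3f} mrem/h "
      f"({'within' if total_dose <= DOSE_LIMIT else 'exceeds'} the 200 mrem/h limit)")
```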
An innovative methodology for measurement of stress distribution of inflatable membrane structures
NASA Astrophysics Data System (ADS)
Zhao, Bing; Chen, Wujun; Hu, Jianhui; Chen, Jianwen; Qiu, Zhenyu; Zhou, Jinyu; Gao, Chengjun
2016-02-01
The inflatable membrane structure has been widely used in the fields of civil building, industrial building, airship, super pressure balloon and spacecraft. It is important to measure the stress distribution of the inflatable membrane structure because it influences the safety of the structural design. This paper presents an innovative methodology for the measurement and determination of the stress distribution of the inflatable membrane structure under different internal pressures, combining photogrammetry and the force-finding method. The shape of the inflatable membrane structure is maintained by the use of pressurized air, and the internal pressure is controlled and measured by means of an automatic pressure control system. The 3D coordinates of the marking points pasted on the membrane surface are acquired by three photographs captured from three cameras based on photogrammetry. After digitizing the markings on the photographs, the 3D curved surfaces are rebuilt. The continuous membrane surfaces are discretized into quadrilateral mesh and simulated by membrane links to calculate the stress distributions using the force-finding method. The internal pressure is simplified to the external node forces in the normal direction according to the contributory area of the node. Once the geometry x, the external force r and the topology C are obtained, the unknown force densities q in each link can be determined. Therefore, the stress distributions of the inflatable membrane structure can be calculated, combining the linear adjustment theory and the force density method based on the force equilibrium of inflated internal pressure and membrane internal force without considering the mechanical properties of the constitutive material. As the use of the inflatable membrane structure is attractive in the field of civil building, an ethylene-tetrafluoroethylene (ETFE) cushion is used with the measurement model to validate the proposed methodology. The comparisons between the obtained results and numerical simulation for the inflation process of the ETFE cushion are performed, and the strong agreements demonstrate that the proposed methodology is feasible and accurate.
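The force-finding step described above can be sketched as a small linear-algebra problem: with the measured node coordinates, the link topology, and the pressure-equivalent nodal loads known, the nodal equilibrium equations are linear in the unknown force densities and can be solved in a least-squares sense. The toy mesh below (one free node connected to four fixed corner nodes, loaded along the surface normal) is only meant to show the structure of that calculation, not the discretization or load values used for the ETFE cushion.

```python
import numpy as np

# Toy membrane patch: one free node, four fixed corner nodes, four links.
# The internal pressure is lumped into a nodal load r on the free node (normal z).
free_node = np.array([0.0, 0.0, 0.2])
fixed_nodes = np.array([
    [ 1.0,  1.0, 0.0],
    [-1.0,  1.0, 0.0],
    [-1.0, -1.0, 0.0],
    [ 1.0, -1.0, 0.0],
])
r = np.array([0.0, 0.0, 2.0])   # pressure-equivalent nodal load (kN), assumed

# Equilibrium of the free node:  sum_j q_j * (x_j - x_free) + r = 0
A = (fixed_nodes - free_node).T              # 3 x 4, one column per link
q, *_ = np.linalg.lstsq(A, -r, rcond=None)   # force densities (kN/m)

lengths = np.linalg.norm(fixed_nodes - free_node, axis=1)
link_forces = q * lengths                    # axial force in each link (kN)
print("force densities:", np.round(q, 3))
print("link forces    :", np.round(link_forces, 3))
# Membrane stresses would follow by dividing each link force by its tributary
# width and the membrane thickness.
```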
Development of 3D pseudo pin-by-pin calculation methodology in ANC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B.; Mayhue, L.; Huria, H.
2012-07-01
Advanced cores and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strong heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature in the AP1000® plant of the Westinghouse next generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges the conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between different fuel assembly types that is not fully captured with the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in the paper. (authors)
Methodological Reporting of Randomized Trials in Five Leading Chinese Nursing Journals
Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu
2014-01-01
Background: Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. Methods: In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. Results: In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34±0.97 (Mean ± SD). No RCT reported descriptions and changes in “trial design,” changes in “outcomes” and “implementation,” or descriptions of the similarity of interventions for “blinding.” Poor reporting was found in detailing the “settings of participants” (13.1%), “type of randomization sequence generation” (1.8%), calculation methods of “sample size” (0.4%), explanation of any interim analyses and stopping guidelines for “sample size” (0.3%), “allocation concealment mechanism” (0.3%), additional analyses in “statistical methods” (2.1%), and targeted subjects and methods of “blinding” (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of “participants,” “interventions,” and definitions of the “outcomes” and “statistical methods.” The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. Conclusions: The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods. PMID:25415382
1984-02-01
Flowchart and text residue (only partly recoverable): the program flow calls CALC1 for flow and engineering calculations, CALC2 for capital cost and operating and maintenance reports, and ECONM for the financial analysis (Section 9.2.5, Financial Analysis Routines).
Werner, Kent; Bosson, Emma; Berglund, Sten
2006-12-01
Safety assessment related to the siting of a geological repository for spent nuclear fuel deep in the bedrock requires identification of potential flow paths and the associated travel times for radionuclides originating at repository depth. Using the Laxemar candidate site in Sweden as a case study, this paper describes modeling methodology, data integration, and the resulting water flow models, focusing on the Quaternary deposits and the upper 150 m of the bedrock. Example simulations identify flow paths to groundwater discharge areas and flow paths in the surface system. The majority of the simulated groundwater flow paths end up in the main surface waters and along the coastline, even though the particles used to trace the flow paths are introduced with a uniform spatial distribution at a relatively shallow depth. The calculated groundwater travel time, determining the time available for decay and retention of radionuclides, is on average longer to the coastal bays than to other biosphere objects at the site. Further, it is demonstrated how GIS-based modeling can be used to limit the number of surface flow paths that need to be characterized for safety assessment. Based on the results, the paper discusses an approach for coupling the present models to a model for groundwater flow in the deep bedrock.
The actual citation impact of European oncological research.
López-Illescas, Carmen; de Moya-Anegón, Félix; Moed, Henk F
2008-01-01
This study provides an overview of the research performance of major European countries in the field Oncology, the most important journals in which they published their research articles, and the most important academic institutions publishing them. The analysis was based on Thomson Scientific's Web of Science (WoS) and calculated bibliometric indicators of publication activity and actual citation impact. Studying the time period 2000-2006, it gives an update of earlier studies, but at the same time it expands their methodologies, using a broader definition of the field, calculating indicators of actual citation impact, and analysing new and policy relevant aspects. Findings suggest that the emergence of Asian countries in the field Oncology has displaced European articles more strongly than articles from the USA; that oncologists who have published their articles in important, more general journals or in journals covering other specialties, rather than in their own specialist journals, have generated a relatively high actual citation impact; and that universities from Germany, and--to a lesser extent--those from Italy, the Netherlands, UK, and Sweden, dominate a ranking of European universities based on number of articles in oncology. The outcomes illustrate that different bibliometric methodologies may lead to different outcomes, and that outcomes should be interpreted with care.
NASA Astrophysics Data System (ADS)
Guissart, A.; Bernal, L. P.; Dimitriadis, G.; Terrapon, V. E.
2017-05-01
This work presents, compares and discusses results obtained with two indirect methods for the calculation of aerodynamic forces and pitching moment from 2D Particle Image Velocimetry (PIV) measurements. Both methodologies are based on the formulations of the momentum balance: the integral Navier-Stokes equations and the "flux equation" proposed by Noca et al. (J Fluids Struct 13(5):551-578, 1999), which has been extended to the computation of moments. The indirect methods are applied to spatio-temporal data for different separated flows around a plate with a 16:1 chord-to-thickness ratio. Experimental data are obtained in a water channel for both a plate undergoing a large amplitude imposed pitching motion and a static plate at high angle of attack. In addition to PIV data, direct measurements of aerodynamic loads are carried out to assess the quality of the indirect calculations. It is found that indirect methods are able to compute the mean and the temporal evolution of the loads for two-dimensional flows with a reasonable accuracy. Nonetheless, both methodologies are noise sensitive, and the parameters impacting the computation should thus be chosen carefully. It is also shown that results can be improved through the use of dynamic mode decomposition (DMD) as a pre-processing step.
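For reference, a hedged sketch of the control-volume momentum balance underlying such indirect load estimation (this is the generic integral form; Noca's flux equation differs in how the pressure term is eliminated):

\[
\mathbf{F}(t) = -\,\rho\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{V}\mathbf{u}\,\mathrm{d}V
\;-\;\rho\oint_{S}\mathbf{u}\,(\mathbf{u}\cdot\mathbf{n})\,\mathrm{d}S
\;-\;\oint_{S} p\,\mathbf{n}\,\mathrm{d}S
\;+\;\oint_{S}\boldsymbol{\tau}\cdot\mathbf{n}\,\mathrm{d}S,
\]
where V is a control volume enclosing the plate, S its outer boundary with outward normal n, u the PIV velocity field, p the reconstructed pressure, and tau the viscous stress tensor; the pitching moment follows from the analogous angular-momentum balance.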
Laso, Jara; Margallo, María; Fullana, Pére; Bala, Alba; Gazulla, Cristina; Irabien, Ángel; Aldaco, Rubén
2017-01-01
To be able to fulfil high market expectations for a number of practical applications, Environmental Product Declarations (EPDs) have to meet and comply with specific and strict methodological prerequisites. These expectations include the possibility to add up Life Cycle Assessment (LCA)-based information in the supply chain and to compare different EPDs. To achieve this goal, common and harmonized calculation rules have to be established, the so-called Product Category Rules (PCRs), which set the overall LCA calculation rules to create EPDs. This document provides PCRs for the assessment of the environmental performance of canned anchovies in the Cantabria Region based on an Environmental Sustainability Assessment (ESA) method. This method uses two main variables: the natural resources sustainability (NRS) and the environmental burdens sustainability (EBS). To reduce the complexity of ESA and facilitate the decision-making process, all variables are normalized and weighted to obtain two global dimensionless indexes: resource consumption (X1) and environmental burdens (X2). •This paper sets the PCRs adapted to Cantabrian canned anchovies. •The ESA method facilitates product comparison and the decision-making process. •This paper establishes all the steps that an EPD should include within the PCRs of Cantabrian canned anchovies.
Ab Initio Crystal Field for Lanthanides.
Ungur, Liviu; Chibotaru, Liviu F
2017-03-13
An ab initio methodology for the first-principle derivation of crystal-field (CF) parameters for lanthanides is described. The methodology is applied to the analysis of CF parameters in [Tb(Pc)2]- (Pc=phthalocyanine) and Dy4K2 ([Dy4K2O(OtBu)12]) complexes, and compared with often used approximate and model descriptions. It is found that the application of geometry symmetrization, and the use of electrostatic point-charge and phenomenological CF models, lead to unacceptably large deviations from predictions based on ab initio calculations for experimental geometry. It is shown how the predictions of standard CASSCF (Complete Active Space Self-Consistent Field) calculations (with 4f orbitals in the active space) can be systematically improved by including effects of dynamical electronic correlation (CASPT2 step) and by admixing electronic configurations of the 5d shell. This is exemplified for the well-studied Er-trensal complex (H3trensal=2,2',2''-tris(salicylideneimido)trimethylamine). The electrostatic contributions to CF parameters in this complex, calculated with true charge distributions in the ligands, yield less than half of the total CF splitting, thus pointing to the dominant role of covalent effects. This analysis allows the conclusion that ab initio crystal field is an essential tool for the decent description of lanthanides. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann
2014-04-01
Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We proposed the use of simple equations to calculate time-dependent sensitivity and specificity based on published survival curves and other time-dependent indicators as predictive values, likelihood ratios, and posttest probability ratios to reappraise prognostic marker accuracy. The methodology is illustrated by back calculating time-dependent indicators from published articles presenting a marker as highly correlated with the time to event, concluding on the high prognostic capacity of the marker, and presenting the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally contribute to interpret the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
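A minimal sketch of the kind of back-calculation the authors describe, assuming the published article provides the Kaplan-Meier survival curves of the marker-positive and marker-negative groups and the marker-positive prevalence (the cumulative/dynamic definitions used here are standard, not necessarily the authors' exact equations):

def time_dependent_accuracy(s_pos: float, s_neg: float, prev: float):
    """Cumulative sensitivity and dynamic specificity at a prognostic time t.

    s_pos : survival probability S+(t) read off the marker-positive curve
    s_neg : survival probability S-(t) read off the marker-negative curve
    prev  : proportion of marker-positive patients
    """
    s_all = prev * s_pos + (1.0 - prev) * s_neg          # overall survival S(t)
    sensitivity = prev * (1.0 - s_pos) / (1.0 - s_all)   # P(marker+ | event by t)
    specificity = (1.0 - prev) * s_neg / s_all           # P(marker- | event-free at t)
    return sensitivity, specificity

# Example: at t = 5 years, S+(5) = 0.40, S-(5) = 0.85, 30% marker-positive.
print(time_dependent_accuracy(0.40, 0.85, 0.30))  # roughly (0.63, 0.83)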
2013-01-01
Background Life cycle assessment (LCA) is a systems-based method used to determine potential impacts to the environment associated with a product throughout its life cycle. Conclusions from LCA studies can be applied to support decisions regarding product design or public policy, therefore, all relevant inputs (e.g., raw materials, energy) and outputs (e.g., emissions, waste) to the product system should be evaluated to estimate impacts. Currently, work-related impacts are not routinely considered in LCA. The objectives of this paper are: 1) introduce the work environment disability-adjusted life year (WE-DALY), one portion of a characterization factor used to express the magnitude of impacts to human health attributable to work-related exposures to workplace hazards; 2) outline the methods for calculating the WE-DALY; 3) demonstrate the calculation; and 4) highlight strengths and weaknesses of the methodological approach. Methods The concept of the WE-DALY and the methodological approach to its calculation is grounded in the World Health Organization’s disability-adjusted life year (DALY). Like the DALY, the WE-DALY equation considers the years of life lost due to premature mortality and the years of life lived with disability outcomes to estimate the total number of years of healthy life lost in a population. The equation requires input in the form of the number of fatal and nonfatal injuries and illnesses that occur in the industries relevant to the product system evaluated in the LCA study, the age of the worker at the time of the fatal or nonfatal injury or illness, the severity of the injury or illness, and the duration of time lived with the outcomes of the injury or illness. Results The methodological approach for the WE-DALY requires data from various sources, multi-step instructions to determine each variable used in the WE-DALY equation, and assumptions based on professional opinion. Conclusions Results support the use of the WE-DALY in a characterization factor in LCA. Integrating occupational health into LCA studies will provide opportunities to prevent shifting of impacts between the work environment and the environment external to the workplace and co-optimize human health, to include worker health, and environmental health. PMID:23497039
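As a hedged illustration of the structure described (the WE-DALY follows the DALY decomposition into years of life lost and years lived with disability; the numbers and the we_daly helper itself are illustrative, not NIOSH or WHO reference values):

def we_daly(fatal_cases, years_of_life_lost_per_case,
            nonfatal_cases, disability_weight, duration_years):
    """Illustrative WE-DALY-style total: YLL + YLD for one industry/outcome pair."""
    yll = fatal_cases * years_of_life_lost_per_case            # premature mortality
    yld = nonfatal_cases * disability_weight * duration_years  # time lived with disability
    return yll + yld

# Example: 2 fatalities (30 life-years lost each) and 150 nonfatal injuries
# with disability weight 0.1 lasting 0.5 years on average.
print(we_daly(2, 30.0, 150, 0.1, 0.5))  # 60 + 7.5 = 67.5 healthy life-years lost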
Development of a real-time transport performance optimization methodology
NASA Technical Reports Server (NTRS)
Gilyard, Glenn
1996-01-01
The practical application of real-time performance optimization is addressed (using a wide-body transport simulation) based on real-time measurements and calculation of incremental drag from forced response maneuvers. Various controller combinations can be envisioned although this study used symmetric outboard aileron and stabilizer. The approach is based on navigation instrumentation and other measurements found on state-of-the-art transports. This information is used to calculate winds and angle of attack. Thrust is estimated from a representative engine model as a function of measured variables. The lift and drag equations are then used to calculate lift and drag coefficients. An expression for drag coefficient, which is a function of parasite drag, induced drag, and aileron drag, is solved from forced excitation response data. Estimates of the parasite drag, curvature of the aileron drag variation, and minimum drag aileron position are produced. Minimum drag is then obtained by repositioning the symmetric aileron. Simulation results are also presented which evaluate the effects of measurement bias and resolution.
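A hedged sketch of the final step, assuming the identified drag model is quadratic in symmetric aileron deflection, so the minimum-drag position can be recovered from a least-squares fit to the forced-excitation estimates (deflection and drag values below are illustrative):

import numpy as np

# Drag-coefficient estimates obtained during a forced symmetric-aileron excitation
# (illustrative numbers): deflection in degrees, CD dimensionless.
delta = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
cd    = np.array([0.0332, 0.0321, 0.0316, 0.0314, 0.0318, 0.0327])

# Fit CD = a*delta**2 + b*delta + c and convert to vertex form.
a, b, c = np.polyfit(delta, cd, 2)
delta_min_drag = -b / (2.0 * a)      # aileron position of minimum drag
cd_min = c - b**2 / (4.0 * a)        # estimated minimum drag coefficient
print(delta_min_drag, cd_min)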
Approaches to Children’s Exposure Assessment: Case Study with Diethylhexylphthalate (DEHP)
Ginsberg, Gary; Ginsberg, Justine; Foos, Brenda
2016-01-01
Children’s exposure assessment is a key input into epidemiology studies, risk assessment and source apportionment. The goals of this article are to describe a methodology for children’s exposure assessment that can be used for these purposes and to apply the methodology to source apportionment for the case study chemical, diethylhexylphthalate (DEHP). A key feature is the comparison of total (aggregate) exposure calculated via a pathways approach to that derived from a biomonitoring approach. The 4-step methodology and its results for DEHP are: (1) Prioritization of life stages and exposure pathways, with pregnancy, breast-fed infants, and toddlers the focus of the case study and pathways selected that are relevant to these groups; (2) Estimation of pathway-specific exposures by life stage wherein diet was found to be the largest contributor for pregnant women, breast milk and mouthing behavior for the nursing infant and diet, house dust, and mouthing for toddlers; (3) Comparison of aggregate exposure by pathways vs biomonitoring-based approaches wherein good concordance was found for toddlers and pregnant women providing confidence in the exposure assessment; (4) Source apportionment in which DEHP presence in foods, children’s products, consumer products and the built environment are discussed with respect to early life mouthing, house dust and dietary exposure. A potential fifth step of the method involves the calculation of exposure doses for risk assessment which is described but outside the scope for the current case study. In summary, the methodology has been used to synthesize the available information to identify key sources of early life exposure to DEHP. PMID:27376320
The Future Impact of Wind on BPA Power System Load Following and Regulation Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Lu, Shuai; McManus, Bart
Wind power is growing at a very fast pace as an alternative generating resource. As the ratio of wind power to total system capacity increases, the impact of wind on various system aspects becomes significant. This paper presents a methodology to study the future impact of wind on BPA power system load following and regulation requirements. Existing methodologies for similar analysis include dispatch model simulation and standard deviation evaluation on load and wind data. The methodology proposed in this paper uses historical data and stochastic processes to simulate the load balancing processes in the BPA power system. It mimics the actual power system operations; therefore, the results are close to reality, yet a study based on this methodology is convenient to perform. The capacity, ramp rate and ramp duration characteristics are extracted from the simulation results. System load following and regulation capacity requirements are calculated accordingly. The ramp rate and ramp duration data obtained from the analysis can be used to evaluate generator response or maneuverability requirements and regulating units' energy requirements, respectively.
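A hedged sketch of one common way to separate regulation from load following in such historical-data studies, assuming minute-resolution net load (load minus wind), treating a rolling average as the load-following signal and the residual as regulation (the decomposition, window and percentile are illustrative, not BPA's exact procedure):

import numpy as np

def balancing_requirements(net_load_mw, window=10, percentile=99.5):
    """net_load_mw: 1-minute net load (load minus wind).
    Returns (load-following ramp-rate requirement in MW/min, regulation capacity in MW)."""
    kernel = np.ones(window) / window
    load_following = np.convolve(net_load_mw, kernel, mode="same")  # slow component
    regulation = net_load_mw - load_following                       # fast residual
    lf_ramp_rate = np.percentile(np.abs(np.diff(load_following)), percentile)
    reg_capacity = np.percentile(np.abs(regulation), percentile)
    return lf_ramp_rate, reg_capacity

# Example with synthetic data: a daily load cycle plus wind-driven noise.
t = np.arange(1440)
net_load = 7000 + 1500 * np.sin(2 * np.pi * t / 1440) \
           + np.random.default_rng(0).normal(0, 60, t.size)
print(balancing_requirements(net_load))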
Molinos-Senante, María; Hernández-Sancho, Francesc; Sala-Garrido, Ramón
2012-01-01
The concept of sustainability involves the integration of economic, environmental, and social aspects and this also applies in the field of wastewater treatment. Economic feasibility studies are a key tool for selecting the most appropriate option from a set of technological proposals. Moreover, these studies are needed to assess the viability of transferring new technologies from pilot-scale to full-scale. In traditional economic feasibility studies, the benefits that have no market price, such as environmental benefits, are not considered and are therefore underestimated. To overcome this limitation, we propose a new methodology to assess the economic viability of wastewater treatment technologies that considers internal and external impacts. The estimation of the costs is based on the use of cost functions. To quantify the environmental benefits from wastewater treatment, the distance function methodology is proposed to estimate the shadow price of each pollutant removed in the wastewater treatment. The application of this methodological approach by decision makers enables the calculation of the true costs and benefits associated with each alternative technology. The proposed methodology is presented as a useful tool to support decision making.
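A minimal sketch of the resulting feasibility test, assuming annualized internal costs and benefits are known and external (environmental) benefits are monetized as shadow price times mass of pollutant removed (all figures and the net_annual_benefit helper are illustrative):

def net_annual_benefit(internal_benefit, annualized_cost, removals, shadow_prices):
    """removals: {pollutant: kg removed per year}; shadow_prices: {pollutant: EUR per kg}."""
    external_benefit = sum(removals[p] * shadow_prices[p] for p in removals)
    return internal_benefit + external_benefit - annualized_cost

# Example: reclaimed-water revenue of 40 kEUR/yr, annualized cost of 120 kEUR/yr,
# and shadow-priced removals of nitrogen and phosphorus.
removals = {"N": 12_000.0, "P": 1_500.0}      # kg/yr
shadow_prices = {"N": 16.0, "P": 30.0}        # EUR/kg (illustrative)
print(net_annual_benefit(40_000.0, 120_000.0, removals, shadow_prices))  # > 0 suggests viability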
Tavares, Alexandre Oliveira; Barros, José Leandro; Santos, Angela
2017-04-01
This study presents a new multidimensional methodology for tsunami vulnerability assessment that combines the morphological, structural, social, and tax component of vulnerability. This new approach can be distinguished from previous methodologies that focused primarily on the evaluation of potentially affected buildings and did not use tsunami numerical modeling. The methodology was applied to the Figueira da Foz and Vila do Bispo municipalities in Portugal. For each area, the potential tsunami-inundated areas were calculated considering the 1755 Lisbon tsunami, which is the greatest disaster caused by natural hazards that ever occurred in Portugal. Furthermore, the four components of the vulnerability were calculated to obtain a composite vulnerability index. This methodology enables us to differentiate the two areas in their vulnerability, highlighting the characteristics of the territory components. This methodology can be a starting point for the creation of a local assessment framework at the municipal scale related to tsunami risk. In addition, the methodology is an important support for the different local stakeholders. © 2016 Society for Risk Analysis.
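A hedged sketch of one way such a composite index can be assembled, assuming the four components are first normalized to [0, 1] and then combined with weights (the normalization and the equal weights below are illustrative; the paper's exact aggregation scheme may differ):

def composite_vulnerability(components, weights):
    """components/weights: {name: value}; components already normalized to [0, 1]."""
    total_w = sum(weights.values())
    return sum(weights[k] * components[k] for k in components) / total_w

# Example for one census block inside the modeled 1755-type inundation area.
components = {"morphological": 0.7, "structural": 0.5, "social": 0.6, "tax": 0.3}
weights    = {"morphological": 1.0, "structural": 1.0, "social": 1.0, "tax": 1.0}
print(composite_vulnerability(components, weights))  # 0.525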
NASA Astrophysics Data System (ADS)
Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.
2006-02-01
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
2004-08-01
Reference and abstract residue (only partly recoverable): citations to exergy-based methods in the Journal of Aircraft (Vol. 40, No. 1, 2003; Vol. 36, No. 2, 1999), to Bejan's "Constructal Theory: Tree-Shaped Flows and Energy...", and to Bourdin's "Numerical Prediction of Wing-Tip Effects on Lift-Induced Drag" (ICAS, 2002). The recoverable abstract text indicates that such methods were used to calculate the induced drag and that the objective of this project is to relate work-potential losses (exergy destruction) to ...
NASA Astrophysics Data System (ADS)
Zacharias, Marios; Giustino, Feliciano
Electron-phonon interactions are of fundamental importance in the study of the optical properties of solids at finite temperatures. Here we present a new first-principles computational technique based on the Williams-Lax theory for performing predictive calculations of the optical spectra, including quantum zero-point renormalization and indirect absorption. The calculation of the Williams-Lax optical spectra is computationally challenging, as it involves the sampling over all possible nuclear quantum states. We develop an efficient computational strategy for performing ''one-shot'' finite-temperature calculations. These require only a single optimal configuration of the atomic positions. We demonstrate our methodology for the case of Si, C, and GaAs, yielding absorption coefficients in good agreement with experiment. This work opens the way for systematic calculations of optical spectra at finite temperature. This work was supported by the UK EPSRC (EP/J009857/1 and EP/M020517/) and the Leverhulme Trust (RL-2012-001), and the Graphene Flagship (EU-FP7-604391).
Rezapour, Aziz; Jafari, Abdosaleh; Mirmasoudi, Kosha; Talebianpour, Hamid
2017-09-01
Health economic evaluation research plays an important role in selecting cost-effective interventions. The purpose of this study was to assess the quality of published articles in Iranian journals related to economic evaluation in health care programs based on Drummond's checklist in terms of numbers, features, and quality. In the present review study, published articles (Persian and English) in Iranian journals related to economic evaluation in health care programs were searched using electronic databases. In addition, the methodological quality of articles' structure was analyzed by Drummond's standard checklist. Based on the inclusion criteria, the search of databases resulted in 27 articles that fully covered economic evaluation in health care programs. A review of articles in accordance with Drummond's criteria showed that the majority of studies had flaws. The most common methodological weakness in the articles was in terms of cost calculation and valuation. Considering such methodological faults in these studies, it is anticipated that these studies would not provide an appropriate feedback to policy makers to allocate health care resources correctly and select suitable cost-effective interventions. Therefore, researchers are required to comply with the standard guidelines in order to better execute and report on economic evaluation studies.
NASA Astrophysics Data System (ADS)
Besemer, Abigail E.
Targeted radionuclide therapy is emerging as an attractive treatment option for a broad spectrum of tumor types because it has the potential to simultaneously eradicate both the primary tumor site as well as the metastatic disease throughout the body. Patient-specific absorbed dose calculations for radionuclide therapies are important for reducing the risk of normal tissue complications and optimizing tumor response. However, the only FDA approved software for internal dosimetry calculates doses based on the MIRD methodology which estimates mean organ doses using activity-to-dose scaling factors tabulated from standard phantom geometries. Despite the improved dosimetric accuracy afforded by direct Monte Carlo dosimetry methods these methods are not widely used in routine clinical practice because of the complexity of implementation, lack of relevant standard protocols, and longer dose calculation times. The main goal of this work was to develop a Monte Carlo internal dosimetry platform in order to (1) calculate patient-specific voxelized dose distributions in a clinically feasible time frame, (2) examine and quantify the dosimetric impact of various parameters and methodologies used in 3D internal dosimetry methods, and (3) develop a multi-criteria treatment planning optimization framework for multi-radiopharmaceutical combination therapies. This platform utilizes serial PET/CT or SPECT/CT images to calculate voxelized 3D internal dose distributions with the Monte Carlo code Geant4. Dosimetry can be computed for any diagnostic or therapeutic radiopharmaceutical and for both pre-clinical and clinical applications. In this work, the platform's dosimetry calculations were successfully validated against previously published reference doses values calculated in standard phantoms for a variety of radionuclides, over a wide range of photon and electron energies, and for many different organs and tumor sizes. Retrospective dosimetry was also calculated for various pre-clinical and clinical patients and large dosimetric differences resulted when using conventional organ-level methods and the patient-specific voxelized methods described in this work. The dosimetric impact of various steps in the 3D voxelized dosimetry process were evaluated including quantitative imaging acquisition, image coregistration, voxel resampling, ROI contouring, CT-based material segmentation, and pharmacokinetic fitting. Finally, a multi-objective treatment planning optimization framework was developed for multi-radiopharmaceutical combination therapies.
Accuracy of Protein Embedding Potentials: An Analysis in Terms of Electrostatic Potentials.
Olsen, Jógvan Magnus Haugaard; List, Nanna Holmgaard; Kristensen, Kasper; Kongsted, Jacob
2015-04-14
Quantum-mechanical embedding methods have in recent years gained significant interest and may now be applied to predict a wide range of molecular properties calculated at different levels of theory. To reach a high level of accuracy in embedding methods, both the electronic structure model of the active region and the embedding potential need to be of sufficiently high quality. In fact, failures in quantum mechanics/molecular mechanics (QM/MM)-based embedding methods have often been associated with the QM/MM methodology itself; however, in many cases the reason for such failures is due to the use of an inaccurate embedding potential. In this paper, we investigate in detail the quality of the electronic component of embedding potentials designed for calculations on protein biostructures. We show that very accurate explicitly polarizable embedding potentials may be efficiently designed using fragmentation strategies combined with single-fragment ab initio calculations. In fact, due to the self-interaction error in Kohn-Sham density functional theory (KS-DFT), use of large full-structure quantum-mechanical calculations based on conventional (hybrid) functionals leads to less accurate embedding potentials than fragment-based approaches. We also find that standard protein force fields yield poor embedding potentials, and it is therefore not advisable to use such force fields in general QM/MM-type calculations of molecular properties other than energies and structures.
Lithography-based automation in the design of program defect masks
NASA Astrophysics Data System (ADS)
Vakanas, George P.; Munir, Saghir; Tejnil, Edita; Bald, Daniel J.; Nagpal, Rajesh
2004-05-01
In this work, we are reporting on a lithography-based methodology and automation in the design of Program Defect masks (PDMs). Leading edge technology masks have ever-shrinking primary features and more pronounced model-based secondary features such as optical proximity corrections (OPC), sub-resolution assist features (SRAFs) and phase-shifted mask (PSM) structures. In order to define defect disposition specifications for critical layers of a technology node, experience alone in deciding worst-case scenarios for the placement of program defects is necessary but may not be sufficient. MEEF calculations initiated from layout pattern data and their integration in a PDM layout flow provide a natural approach for improvements, relevance and accuracy in the placement of programmed defects. This methodology provides closed-loop feedback between layout and hard defect disposition specifications, thereby minimizing engineering test restarts, improving quality and reducing cost of high-end masks. Apart from SEMI and industry standards, best-known methods (BKMs) in integrated lithographically-based layout methodologies and automation specific to PDMs are scarce. The contribution of this paper lies in the implementation of Design-For-Test (DFT) principles to a synergistic interaction of CAD Layout and Aerial Image Simulator to drive layout improvements, highlight layout-to-fracture interactions and output accurate program defect placement coordinates to be used by tools in the mask shop.
The estimation of absorbed dose rates for non-human biota : an extended inter-comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batlle, J. V. I.; Beaugelin-Seiller, K.; Beresford, N. A.
An exercise to compare 10 approaches for the calculation of unweighted whole-body absorbed dose rates was conducted for 74 radionuclides and five of the ICRP's Reference Animals and Plants, or RAPs (duck, frog, flatfish egg, rat and elongated earthworm), selected for this exercise to cover a range of body sizes, dimensions and exposure scenarios. Results were analysed using a non-parametric method requiring no specific hypotheses about the statistical distribution of data. The obtained unweighted absorbed dose rates for internal exposure compare well between the different approaches, with 70% of the results falling within a range of variation of ±20%. The variation is greater for external exposure, although 90% of the estimates are within an order of magnitude of one another. There are some discernible patterns where specific models over- or under-predicted. These are explained based on the methodological differences, including the number of daughter products included in the calculation of dose rate for a parent nuclide; source-target geometry; databases for discrete energy and yield of radionuclides; rounding errors in integration algorithms; and intrinsic differences in calculation methods. For certain radionuclides, these factors combine to generate systematic variations between approaches. Overall, the technique chosen to interpret the data enabled methodological differences in dosimetry calculations to be quantified and compared, allowing the identification of common issues between different approaches and providing greater assurance on the fundamental dose conversion coefficient approaches used in available models for assessing radiological effects to biota.
Characterization of a mine fire using atmospheric monitoring system sensor data
Yuan, L.; Thomas, R.A.; Zhou, L.
2017-01-01
Atmospheric monitoring systems (AMS) have been widely used in underground coal mines in the United States for the detection of fire in the belt entry and the monitoring of other ventilation-related parameters such as airflow velocity and methane concentration in specific mine locations. In addition to an AMS being able to detect a mine fire, the AMS data have the potential to provide fire characteristic information such as fire growth — in terms of heat release rate — and exact fire location. Such information is critical in making decisions regarding fire-fighting strategies, underground personnel evacuation and optimal escape routes. In this study, a methodology was developed to calculate the fire heat release rate using AMS sensor data for carbon monoxide concentration, carbon dioxide concentration and airflow velocity based on the theory of heat and species transfer in ventilation airflow. Full-scale mine fire experiments were then conducted in the Pittsburgh Mining Research Division’s Safety Research Coal Mine using an AMS with different fire sources. Sensor data collected from the experiments were used to calculate the heat release rates of the fires using this methodology. The calculated heat release rate was compared with the value determined from the mass loss rate of the combustible material using a digital load cell. The experimental results show that the heat release rate of a mine fire can be calculated using AMS sensor data with reasonable accuracy. PMID:28845058
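A hedged sketch of the kind of calculation involved, assuming carbon dioxide generation calorimetry with a heat release of roughly 13.3 kJ per gram of CO2 generated (a common approximation; the constant, entry geometry and concentration handling below are illustrative, not the exact formulation used in the study):

def heat_release_rate_kw(co2_down_ppm, co2_up_ppm, velocity_m_s, area_m2,
                         air_density=1.2, e_co2_kj_per_g=13.3):
    """Estimate fire heat release rate (kW) from upstream/downstream CO2 sensors."""
    M_AIR, M_CO2 = 28.97, 44.01
    delta_mole_fraction = (co2_down_ppm - co2_up_ppm) * 1e-6
    # Mass of CO2 generated per second in the ventilation stream (g/s).
    air_mass_flow_g_s = air_density * velocity_m_s * area_m2 * 1e3
    co2_gen_g_s = air_mass_flow_g_s * delta_mole_fraction * (M_CO2 / M_AIR)
    return e_co2_kj_per_g * co2_gen_g_s   # kJ/s = kW

# Example: 900 ppm downstream vs 600 ppm upstream in a 10 m^2 entry at 1.5 m/s.
print(heat_release_rate_kw(900, 600, 1.5, 10.0))  # roughly 110 kW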
Grain-Boundary Resistance in Copper Interconnects: From an Atomistic Model to a Neural Network
NASA Astrophysics Data System (ADS)
Valencia, Daniel; Wilson, Evan; Jiang, Zhengping; Valencia-Zapata, Gustavo A.; Wang, Kuang-Chung; Klimeck, Gerhard; Povolotskyi, Michael
2018-04-01
Orientation effects on the specific resistance of copper grain boundaries are studied systematically with two different atomistic tight-binding methods. A methodology is developed to model the specific resistance of grain boundaries in the ballistic limit using the embedded atom model, tight-binding methods, and nonequilibrium Green's functions. The methodology is validated against first-principles calculations for thin films with a single coincident grain boundary, with 6.4% deviation in the specific resistance. A statistical ensemble of 600 large, random structures with grains is studied. For structures with three grains, it is found that the distribution of specific resistances is close to normal. Finally, a compact model for grain-boundary-specific resistance is constructed based on a neural network.
NASA Astrophysics Data System (ADS)
Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank
2018-06-01
Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison to the RT-specific scanner. Proton ranges were additionally extracted from conventional single-energy CT (SECT) to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied in a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed, which was subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For phantoms, the DECT method yielded overall better material segmentation with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners, but only 64% for a twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), like the errors yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average −1.2 ± 1.2 mm (−0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, where the dual-spiral scanner resulted in media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.
NASA Astrophysics Data System (ADS)
Govoni, Marco; Galli, Giulia
Green's function based many-body perturbation theory (MBPT) methods are well established approaches to compute quasiparticle energies and electronic lifetimes. However, their application to large systems - for instance to heterogeneous systems, nanostructured, disordered, and defective materials - has been hindered by high computational costs. We will discuss recent MBPT methodological developments leading to an efficient formulation of electron-electron and electron-phonon interactions, and that can be applied to systems with thousands of electrons. Results using a formulation that does not require the explicit calculation of virtual states, nor the storage and inversion of large dielectric matrices will be presented. We will discuss data collections obtained using the WEST code, the advantages of the algorithms used in WEST over standard techniques, and the parallel performance. Work done in collaboration with I. Hamada, R. McAvoy, P. Scherpelz, and H. Zheng. This work was supported by MICCoM, as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division and by ANL.
Quality of Reporting Nutritional Randomized Controlled Trials in Patients With Cystic Fibrosis.
Daitch, Vered; Babich, Tanya; Singer, Pierre; Leibovici, Leonard
2016-08-01
Randomized controlled trials (RCTs) have a major role in the making of evidence-based guidelines. The aim of the present study was to critically appraise the RCTs that addressed nutritional interventions in patients with cystic fibrosis. Embase, PubMed, and the Cochrane Library were systematically searched until July 2015. Methodology and reporting of nutritional RCTs were evaluated by the Consolidated Standards of Reporting Trials (CONSORT) checklist and additional dimensions relevant to patients with CF. Fifty-one RCTs were included. Full details on methods were provided in a minority of studies. The mean duration of intervention was <6 months. 56.9% of the RCTs did not define a primary outcome; 70.6% of studies did not provide details on sample size calculation; and only 31.4% reported on subgroups or separated between important subgroups. The examined RCTs were characterized by weak methodology, small numbers of patients without sample size calculations, and relatively short interventions, and they often did not examine the outcomes that are important to the patient. Improvement over the years has been minor.
Brady, S L; Kaufman, R A
2015-05-01
To develop an automated methodology to estimate patient examination dose in digital radiography (DR) imaging using DICOM metadata as a quality assurance (QA) tool. Patient examination and demographical information were gathered from metadata analysis of DICOM header data. The x-ray system radiation output (i.e., air KERMA) was characterized for all filter combinations used for patient examinations. Average patient thicknesses were measured for head, chest, abdomen, knees, and hands using volumetric images from CT. Backscatter factors (BSFs) were calculated from examination kVp. Patient entrance skin air KERMA (ESAK) was calculated by (1) looking up examination technique factors taken from DICOM header metadata (i.e., kVp and mA s) to derive an air KERMA (k air) value based on an x-ray characteristic radiation output curve; (2) scaling k air with a BSF value; and (3) correcting k air for patient thickness. Finally, patient entrance skin dose (ESD) was calculated by multiplying a mass-energy attenuation coefficient ratio by ESAK. Patient ESD calculations were computed for common DR examinations at our institution: dual view chest, anteroposterior (AP) abdomen, lateral (LAT) skull, dual view knee, and bone age (left hand only) examinations. ESD was calculated for a total of 3794 patients; mean age was 11 ± 8 yr (range: 2 months to 55 yr). The mean ESD range was 0.19-0.42 mGy for dual view chest, 0.28-1.2 mGy for AP abdomen, 0.18-0.65 mGy for LAT view skull, 0.15-0.63 mGy for dual view knee, and 0.10-0.12 mGy for bone age (left hand) examinations. A methodology combining DICOM header metadata and basic x-ray tube characterization curves was demonstrated. In a regulatory era where patient dose reporting has become increasingly in demand, this methodology will allow a knowledgeable user the means to establish an automatable dose reporting program for DR and perform patient dose related QA testing for digital x-ray imaging.
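A hedged sketch of the dose chain described, with the kVp and mAs values assumed to have been read from the DICOM header and a made-up characteristic output curve standing in for the measured tube output (all coefficients, defaults and the entrance_skin_dose_mgy helper are illustrative, not the authors' calibration):

def entrance_skin_dose_mgy(kvp, mas, sid_cm, patient_thickness_cm,
                           bsf=1.35, mu_en_ratio=1.06):
    """Illustrative ESD estimate: output curve -> air KERMA -> BSF -> thickness -> ESD."""
    # Hypothetical characteristic output curve: air KERMA per mAs at 100 cm (mGy/mAs).
    k_per_mas_at_100cm = 0.05 * (kvp / 80.0) ** 2
    # Inverse-square correction from 100 cm to the entrance skin plane.
    skin_distance_cm = sid_cm - patient_thickness_cm
    k_air = k_per_mas_at_100cm * mas * (100.0 / skin_distance_cm) ** 2
    esak = k_air * bsf                     # entrance skin air KERMA
    return esak * mu_en_ratio              # tissue-to-air conversion -> ESD

# Example: AP abdomen at 80 kVp, 5 mAs, 100 cm SID, 18 cm thick patient.
print(entrance_skin_dose_mgy(80, 5.0, 100.0, 18.0))  # roughly 0.5 mGy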
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
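A minimal sketch of a GDP-based loss calculation of the kind described, assuming each affected county contributes its annual GDP prorated by the duration of its business disruption (the county figures and the gdp_loss_musd helper are illustrative; REAcct itself additionally distinguishes sectors and indirect effects):

def gdp_loss_musd(affected_counties):
    """affected_counties: list of (annual GDP in million USD, disrupted days) tuples."""
    return sum(annual_gdp * (days / 365.0) for annual_gdp, days in affected_counties)

# Example: three counties inside the evacuation/interdiction area.
counties = [(12_000.0, 30), (4_500.0, 90), (900.0, 365)]
print(gdp_loss_musd(counties))  # roughly 3000 million USD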
Computational Modeling of Mixed Solids for CO2 Capture Sorbents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Yuhua
2015-01-01
Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies, and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only the selected CO2 sorbent candidates were further considered for experimental validation. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology can not only be used to search for good candidates from existing databases of solid materials, but can also provide guidelines for synthesizing new materials. In this presentation, we apply our screening methodology to mixed solid systems to adjust the turnover temperature, helping to develop CO2 capture technologies.
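In hedged form, the screening criterion described corresponds to the standard ab initio thermodynamics relation for a generic capture reaction MO(s) + CO2(g) -> MCO3(s):

\[
\Delta G(T, P_{\mathrm{CO_2}}) \;=\; \Delta H(T) - T\,\Delta S(T) - RT\,\ln\!\left(\frac{P_{\mathrm{CO_2}}}{P^{0}}\right),
\]
where ΔH and ΔS are obtained from the DFT total energies and phonon free energies of the solid phases together with the gas-phase thermochemistry of CO2; capture is thermodynamically favourable when ΔG < 0, and the turnover temperature is the temperature at which ΔG = 0 at the CO2 partial pressure imposed by the pre- or post-combustion process.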
Lerner, E Brooke; Garrison, Herbert G; Nichol, Graham; Maio, Ronald F; Lookman, Hunaid A; Sheahan, William D; Franz, Timothy R; Austad, James D; Ginster, Aaron M; Spaite, Daniel W
2012-02-01
Calculating the cost of an emergency medical services (EMS) system using a standardized method is important for determining the value of EMS. This article describes the development of a methodology for calculating the cost of an EMS system to its community. This includes a tool for calculating the cost of EMS (the "cost workbook") and detailed directions for determining cost (the "cost guide"). The 12-step process that was developed is consistent with current theories of health economics, applicable to prehospital care, flexible enough to be used in varying sizes and types of EMS systems, and comprehensive enough to provide meaningful conclusions. It was developed by an expert panel (the EMS Cost Analysis Project [EMSCAP] investigator team) in an iterative process that included pilot testing the process in three diverse communities. The iterative process allowed ongoing modification of the toolkit during the development phase, based upon direct, practical, ongoing interaction with the EMS systems that were using the toolkit. The resulting methodology estimates EMS system costs within a user-defined community, allowing either the number of patients treated or the estimated number of lives saved by EMS to be assessed in light of the cost of those efforts. Much controversy exists about the cost of EMS and whether the resources spent for this purpose are justified. However, the existence of a validated toolkit that provides a standardized process will allow meaningful assessments and comparisons to be made and will supply objective information to inform EMS and community officials who are tasked with determining the utilization of scarce societal resources. © 2012 by the Society for Academic Emergency Medicine.
NASA Astrophysics Data System (ADS)
Konovodov, V. V.; Valentov, A. V.; Kukhar, I. S.; Retyunskiy, O. Yu; Baraksanov, A. S.
2016-08-01
This work proposes an algorithm for strength calculation under alternating stresses using a developed methodology for constructing the limiting-stress diagram. The overall safety factor is defined by the suggested formula. In the great majority of cases, strength calculations of components operating under alternating stresses are performed as checking calculations. This is primarily because the overall fatigue strength reduction factor (Kσg or Kτg) can only be chosen approximately during component design, as at this stage the engineer has only an approximate idea of the component's size and shape.
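For context, a hedged sketch of the conventional limiting-stress-diagram safety factors on which such a checking calculation typically rests (the specific formula suggested in the paper is not reproduced here):

\[
n_{\sigma} = \frac{\sigma_{-1}}{K_{\sigma g}\,\sigma_{a} + \psi_{\sigma}\,\sigma_{m}}, \qquad
n_{\tau} = \frac{\tau_{-1}}{K_{\tau g}\,\tau_{a} + \psi_{\tau}\,\tau_{m}}, \qquad
n = \frac{n_{\sigma}\,n_{\tau}}{\sqrt{n_{\sigma}^{2} + n_{\tau}^{2}}} \ge [n],
\]
where σ-1 and τ-1 are the endurance limits, σa and σm (τa and τm) the stress amplitudes and mean stresses, Kσg and Kτg the overall fatigue strength reduction factors, ψσ and ψτ the mean-stress sensitivity coefficients, and [n] the allowable safety factor.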
Methodology for calculating power consumption of planetary mixers
NASA Astrophysics Data System (ADS)
Antsiferov, S. I.; Voronov, V. P.; Evtushenko, E. I.; Yakovlev, E. A.
2018-03-01
The paper presents the methodology and equations for calculating the power consumption necessary to overcome the resistance of a dry mixture caused by the movement of cylindrical rods in the body of a planetary mixer, as well as the calculation of the power consumed by idling mixers of this type. The equations take into account the size and physico-mechanical properties of mixing material, the size and shape of the mixer's working elements and the kinematics of its movement. The dependence of the power consumption on the angle of rotation in the plane perpendicular to the axis of rotation of the working member is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Messner, M. C.; Sham, T. -L.
The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate an SMT data based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying evaluation of elevated temperature cyclic service. This methodology should minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, analytical studies and evaluation of thermomechanical test results continued in FY17. This report presents the results of those studies. An EPP strain limits methodology assessment was based on recent two-bar thermal ratcheting test results on 316H stainless steel in the temperature range of 405 to 705°C. Strain range predictions from the EPP evaluation of the two-bar tests were also evaluated and compared with the experimental results. The role of sustained primary loading on cyclic life was assessed using the results of pressurized SMT data from tests on Alloy 617 at 950°C. A viscoplastic material model was used in an analytic simulation of two-bar tests to compare with EPP strain limits assessments using isochronous stress strain curves that are consistent with the viscoplastic material model. A finite element model of a prior 304H stainless steel Oak Ridge National Laboratory (ORNL) nozzle-to-sphere test was developed and used for EPP strain limits and creep-fatigue code case damage evaluations. A theoretical treatment of a recurring issue with convergence criteria for plastic shakedown illustrated the role of computer machine precision in EPP calculations.
Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza
2016-09-01
The application of the Quality by Design principles is one of the key issues of recent pharmaceutical developments. In the past decade a lot of knowledge has been collected about the practical realization of the concept, but there are still many unanswered questions. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in the over/underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. The completion of RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce the uncertainties. Nevertheless, since different studies focus on the use of a given DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2 level full factorial, Central Composite, Box-Behnken, 3 level fractional and 3 level full factorial design) on model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN-based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and the use of design layouts in which the extreme values of the factors are better represented is recommended. Copyright © 2016 Elsevier B.V. All rights reserved.
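A minimal sketch of the kind of comparison described, assuming a two-factor slice of the design and a full quadratic response-surface model fitted by ordinary least squares (the coded factor settings and the response values are illustrative; an ANN counterpart would be fitted to the same table):

import numpy as np

# Coded factor settings (e.g., from a Box-Behnken-style layout) and a measured CQA.
x1 = np.array([-1.0, -1.0,  1.0,  1.0, 0.0,  0.0, -1.0, 1.0, 0.0])
x2 = np.array([-1.0,  1.0, -1.0,  1.0, 0.0, -1.0,  0.0, 0.0, 1.0])
y  = np.array([62.0, 71.0, 68.0, 80.0, 75.0, 70.0, 69.0, 77.0, 78.0])

# Full quadratic RSM: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], np.round(coef, 2))))

# Predict the CQA at an untested point of the design space, e.g. (x1, x2) = (0.5, -0.5).
point = np.array([1.0, 0.5, -0.5, -0.25, 0.25, 0.25])
print(float(point @ coef))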
NASA Astrophysics Data System (ADS)
Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali
2016-08-01
The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for the 2, 10, and 50% hazard levels for all sites studied in the Marmara region.
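A hedged sketch of how simulated scenario ground motions can be turned into hazard estimates, assuming each rupture scenario carries an annual occurrence rate and a site-specific simulated ground-motion amplitude (the rates, amplitudes and Poissonian assumption below are illustrative):

import math

# (annual rate of the scenario, simulated peak ground acceleration at the site in g)
scenarios = [(0.004, 0.35), (0.004, 0.22), (0.002, 0.48),
             (0.010, 0.12), (0.006, 0.18), (0.001, 0.61)]

def exceedance_probability(threshold_g, years):
    """Poissonian probability of exceeding threshold_g at least once in 'years' years."""
    annual_rate = sum(rate for rate, pga in scenarios if pga > threshold_g)
    return 1.0 - math.exp(-annual_rate * years)

# 10% in 50 years is the classic design level; check a 0.3 g threshold.
print(exceedance_probability(0.3, 50))  # roughly 0.30 for these illustrative numbers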
Development of a Probabilistic Assessment Methodology for Evaluation of Carbon Dioxide Storage
Burruss, Robert A.; Brennan, Sean T.; Freeman, P.A.; Merrill, Matthew D.; Ruppert, Leslie F.; Becker, Mark F.; Herkelrath, William N.; Kharaka, Yousif K.; Neuzil, Christopher E.; Swanson, Sharon M.; Cook, Troy A.; Klett, Timothy R.; Nelson, Philip H.; Schenk, Christopher J.
2009-01-01
This report describes a probabilistic assessment methodology developed by the U.S. Geological Survey (USGS) for evaluation of the resource potential for storage of carbon dioxide (CO2) in the subsurface of the United States as authorized by the Energy Independence and Security Act (Public Law 110-140, 2007). The methodology is based on USGS assessment methodologies for oil and gas resources created and refined over the last 30 years. The resource that is evaluated is the volume of pore space in the subsurface in the depth range of 3,000 to 13,000 feet that can be described within a geologically defined storage assessment unit consisting of a storage formation and an enclosing seal formation. Storage assessment units are divided into physical traps (PTs), which in most cases are oil and gas reservoirs, and the surrounding saline formation (SF), which encompasses the remainder of the storage formation. The storage resource is determined separately for these two types of storage. Monte Carlo simulation methods are used to calculate a distribution of the potential storage size for individual PTs and the SF. To estimate the aggregate storage resource of all PTs, a second Monte Carlo simulation step is used to sample the size and number of PTs. The probability of successful storage for individual PTs or the entire SF, defined in this methodology by the likelihood that the amount of CO2 stored will be greater than a prescribed minimum, is based on an estimate of the probability of containment using present-day geologic knowledge. The report concludes with a brief discussion of needed research data that could be used to refine assessment methodologies for CO2 sequestration.
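The two-stage Monte Carlo aggregation described above can be illustrated with a toy computation; the following sketch is not from the report, and the distributions, storage efficiency range and CO2 density are placeholders rather than USGS assessment inputs.

```python
# Illustrative two-stage Monte Carlo aggregation of CO2 storage resources.
# Distributions and parameters are placeholders, not USGS assessment inputs.
import numpy as np

rng = np.random.default_rng(1)

def pt_storage_mass(n):
    """Sample storage mass (Mt CO2) for a single physical trap (illustrative)."""
    pore_volume = rng.lognormal(mean=np.log(5e7), sigma=0.5, size=n)  # m^3
    efficiency = rng.uniform(0.1, 0.4, size=n)                        # storable fraction of pore space
    rho_co2 = rng.uniform(600.0, 750.0, size=n)                       # kg/m^3 at reservoir conditions
    return pore_volume * efficiency * rho_co2 / 1e9                   # Mt CO2

# Stage 1: size distribution for an individual physical trap (PT)
single_pt = pt_storage_mass(100_000)

# Stage 2: aggregate resource, sampling both the number and the sizes of PTs
aggregate = np.array([pt_storage_mass(rng.integers(5, 20)).sum() for _ in range(5_000)])

print("P50 single-trap storage (Mt):", round(np.percentile(single_pt, 50), 1))
print("P5-P95 aggregate storage (Mt):", np.round(np.percentile(aggregate, [5, 95]), 1))
```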
Development and application of a hybrid transport methodology for active interrogation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Royston, K.; Walters, W.; Haghighat, A.
A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of gamma source distribution from (n, γ) interactions; iii) determination of gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values and taking significantly less time than a reference MCNP5 calculation. (authors)
Helal-Neto, Edward; Cabezas, Santiago Sánchez; Sancenón, Félix; Martínez-Máñez, Ramón; Santos-Oliveira, Ralph
2018-05-10
The use of monoclonal antibodies (Mab) in current medicine is increasing. Antibody-drug conjugates (ADCs) represent an increasingly important modality for treating several types of cancer. In this area, the use of Mab associated with nanoparticles is a valuable strategy. However, the methodology used to calculate Mab entrapment, efficiency and content is extremely expensive. In this study we developed and tested a novel, very simple one-step methodology to calculate monoclonal antibody entrapment in mesoporous silica (with magnetic core) nanoparticles using the radiolabeling process as the primary methodology. The magnetic core mesoporous silica nanoparticles were successfully developed and characterised. The PXRD analysis at high angles confirmed the presence of magnetic cores in the structures, and transmission electron microscopy allowed the structure size to be determined (58.9 ± 8.1 nm). From the isotherm curve, a specific surface area of 872 m²/g was estimated along with a pore volume of 0.85 cm³/g and an average pore diameter of 3.15 nm. The radiolabeling process used for the indirect determination was carried out successfully. Trastuzumab was successfully labeled (>97%) with Tc-99m, generating a clear suspension. Besides, almost all the Tc-99m used (labeling the trastuzumab) remained trapped on the surface of the mesoporous silica for a period as long as 8 h. The indirect methodology demonstrated a high entrapment of Tc-99m-trastuzumab on the magnetic core mesoporous silica surface. The results confirmed the potential of the indirect entrapment efficiency methodology using the radiolabeling process as a one-step, easy and cheap methodology. Copyright © 2018 Elsevier B.V. All rights reserved.
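The arithmetic behind an indirect, radiolabeling-based entrapment efficiency can be sketched in a few lines; the activity values below are invented examples, not data from the study.

```python
# Illustrative calculation of entrapment efficiency from radiolabel activity,
# following the general logic of an indirect (radiolabeling-based) method.
# All activity values below are made-up examples, not data from the study.

def entrapment_efficiency(bound_activity_cps, total_activity_cps):
    """Percent of labeled antibody retained on the nanoparticle surface."""
    return 100.0 * bound_activity_cps / total_activity_cps

total = 125_000.0        # counts per second added to the nanoparticle suspension
supernatant = 3_500.0    # counts per second remaining free after separation
bound = total - supernatant

print(f"Entrapment efficiency: {entrapment_efficiency(bound, total):.1f}%")
```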
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
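As a minimal numerical illustration of the "allowance for clustering" the review checks for, the sketch below inflates an individually randomized sample size with the standard design effect 1 + (m - 1)·ICC; this is the simple parallel-cluster correction only, not the full stepped-wedge formula with time effects and repeated measures, and all inputs are placeholders.

```python
# Minimal sketch: inflating an individually randomized sample size for clustering
# with the standard design effect DE = 1 + (m - 1) * ICC. A full SW-CRT
# calculation would also need time effects and the repeated-measures structure.
import math
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sample comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def n_cluster_adjusted(delta, sd, cluster_size, icc, alpha=0.05, power=0.8):
    design_effect = 1 + (cluster_size - 1) * icc
    return n_individual(delta, sd, alpha, power) * design_effect

n = n_cluster_adjusted(delta=0.5, sd=1.0, cluster_size=20, icc=0.05)
print("Per-arm sample size allowing for clustering:", math.ceil(n))
```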
Bao, Yihai; Main, Joseph A; Noh, Sam-Young
2017-08-01
A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness.
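The proposed robustness metric reduces to a simple normalization and minimization over scenarios; the sketch below illustrates that arithmetic with made-up capacities and loads, not results from the paper.

```python
# Illustrative computation of the proposed robustness metric: the minimum, over
# all column-removal scenarios, of the ultimate capacity under sudden column loss
# normalized by the applicable service-level gravity loading.
# Capacities and loads below are made-up values, not results from the paper.

scenarios = {
    # scenario: (ultimate capacity under sudden column loss, service-level gravity load), kN/m^2
    "corner column":   (14.2, 6.0),
    "edge column":     (16.8, 6.0),
    "interior column": (19.5, 6.0),
}

normalized = {name: cap / load for name, (cap, load) in scenarios.items()}
robustness = min(normalized.values())

print("Normalized capacities:", normalized)
print("Robustness index (governing scenario):", robustness)
```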
NASA Astrophysics Data System (ADS)
Brezgin, V. I.; Brodov, Yu M.; Kultishev, A. Yu
2017-11-01
The report reviews methods for improving steam turbine unit design and operation based on the application of modern information technologies. In accordance with the life cycle support methodology, a conceptual model of the information support system for the main life cycle (LC) stages of a steam turbine unit is suggested. A classifying system, which ensures the creation of sustainable information links between the engineering team (manufacturer's plant) and customer organizations (power plants), is proposed. Within the report, the principle of extending parameterization beyond geometric constructions in the process of designing and improving steam turbine unit equipment is proposed, studied and justified. The report presents a steam turbine unit equipment design methodology based on a brand new oil-cooler design system that has been developed and implemented by the authors. This design system combines a construction subsystem, which is characterized by extensive usage of family tables and templates, and a computation subsystem, which includes a methodology for zone-by-zone thermal-hydraulic design calculations of oil coolers. The report presents data on the developed software for operational monitoring and assessment of equipment parameters, as well as on its implementation at five power plants.
Strength Property Estimation for Dry, Cohesionless Soils Using the Military Cone Penetrometer
1992-05-01
by Meier and Baladi (1988). Their methodology is based on a theoretical formulation of the CI problem using cavity expansion theory to relate cone... Baladi (1981), incorporates three mechanical properties (cohesion, friction angle, and shear modulus) and the total unit weight. Obviously, these four...unknown soil properties cannot be back-calculated directly from a single CI measurement. To ameliorate this problem, Meier and Baladi estimate the total
A new mathematical modeling approach for the energy of threonine molecule
NASA Astrophysics Data System (ADS)
Sahiner, Ahmet; Kapusuz, Gulden; Yilmaz, Nurullah
2017-07-01
In this paper, we propose an improved methodology for finding optimum energy values in energy conformation problems. First, we construct Bezier surfaces near local minimizers based on data obtained from Density Functional Theory (DFT) calculations. Second, we blend the constructed surfaces in order to obtain a single smooth model. Finally, we apply a global optimization algorithm to find the two torsion angles that make the energy of the molecule a minimum.
Current Evidence on Heart Rate Variability Biofeedback as a Complementary Anticraving Intervention.
Alayan, Nour; Eller, Lucille; Bates, Marsha E; Carmody, Dennis P
2018-05-21
The limited success of conventional anticraving interventions encourages research into new treatment strategies. Heart rate variability biofeedback (HRVB), which is based on slowed breathing, has been shown to reduce symptom severity in various disorders. HRVB, and certain rates of controlled breathing (CB), may offer therapeutic potential as a complementary drug-free treatment option to help control substance craving. This review evaluated current evidence on the effectiveness of HRVB and CB training as a complementary anticraving intervention, based on guidelines from the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols. Studies that assessed a cardiorespiratory feedback or CB intervention with substance craving as an outcome were selected. Effect sizes were calculated for each study. The Scale for Assessing Scientific Quality of Investigations in Complementary and Alternative Medicine was used to evaluate the quality of each study reviewed. A total of eight articles remained for final review, including controlled studies with or without randomization, as well as noncontrolled trials. Most studies showed positive results, with a variety of methodological quality levels and effect sizes. Current HRVB studies rated moderately on methodological rigor and showed inconsistent magnitudes of calculated effect size (0.074-0.727) across populations. The largest effect size was found in a nonclinical college population of high food cravers utilizing the most intensive HRVB training time of 240 min. Despite the limitations of this review, there is emerging evidence that HRVB and CB training can be of significant therapeutic potential. Larger clinical trials are needed with methodological improvements such as longer treatment duration, adequate control conditions, measures of adherence and compliance, longitudinal examination of craving changes, and more comprehensive methods of craving measurement.
Computational studies of horizontal axis wind turbines
NASA Astrophysics Data System (ADS)
Xu, Guanpeng
A numerical technique has been developed for efficiently simulating fully three-dimensional viscous fluid flow around horizontal axis wind turbines (HAWT) using a zonal approach. The flow field is viewed as a combination of viscous regions, inviscid regions and vortices. The method solves the costly unsteady Reynolds averaged Navier-Stokes (RANS) equations only in the viscous region around the turbine blades. It solves the full potential equation in the inviscid region where flow is irrotational and isentropic. The tip vortices are simulated using a Lagrangian approach, thus removing the need to accurately resolve them on a fine grid. The hybrid method is shown to provide good results with modest CPU resources. A full Navier-Stokes based methodology has also been developed for modeling wind turbines at high wind conditions where extensive stall may occur. An overset grid based version that can model rotor-tower interactions has been developed. Finally, a blade element theory based methodology has been developed for the purpose of developing improved tip loss models and stall delay models. The effects of turbulence are simulated using a zero equation eddy viscosity model, or a one equation Spalart-Allmaras model. Two transition models, one based on Eppler's criterion, and the other based on Michel's criterion, have been developed and tested. The hybrid method has been extensively validated for axial wind conditions for three rotors---NREL Phase II, Phase III, and Phase VI configurations. A limited set of calculations has been done for rotors operating under yaw conditions. Preliminary simulations have also been carried out to assess the effects of the tower wake on the rotor. In most of these cases, satisfactory agreement has been obtained with measurements. Using the numerical results from the present methodologies as a guide, Prandtl's tip loss model and Corrigan's stall delay model were correlated with the present calculations. An improved tip loss model has been obtained. A correction to Corrigan's stall delay model has also been developed. Incorporation of these corrections is shown to considerably improve power predictions, even when a very simple aerodynamic theory---blade element method with annular inflow---is used.
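For reference, the classical Prandtl tip-loss factor used in blade element momentum theory is easy to evaluate; the sketch below shows that baseline formula only (the improved tip-loss model developed in the thesis is not reproduced here), with an invented example rotor.

```python
# Classical Prandtl tip-loss factor as commonly used in blade element momentum
# (BEM) theory. This is the baseline model discussed in the abstract; the
# improved tip-loss correction developed in the thesis is not reproduced here.
import numpy as np

def prandtl_tip_loss(r, R, B, phi):
    """
    r   : local radius (m)
    R   : rotor tip radius (m)
    B   : number of blades
    phi : local inflow angle (rad)
    """
    f = (B / 2.0) * (R - r) / (r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

# Example: 3-bladed rotor, 5 m tip radius, inflow angle of 7 degrees
radii = np.linspace(1.0, 4.95, 5)
F = prandtl_tip_loss(radii, R=5.0, B=3, phi=np.radians(7.0))
print(np.round(F, 3))
```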
2014-01-01
Background Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations, which change as children grow. Methods We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children aged 0–13 years. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Results The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. Conclusions The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments. PMID:24885453
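The weight-band dosing logic at the heart of such a costing exercise can be sketched as follows; the weight bands, tablet counts and unit price are placeholders for illustration, not the values used in the study.

```python
# Minimal sketch of a weight-band based monthly ARV cost calculation.
# Weight bands, daily doses, and unit prices below are placeholders,
# not the values used in the study.

WEIGHT_BANDS = [
    # (min_kg, max_kg, tablets_per_day)
    (3.0, 5.9, 1.0),
    (6.0, 9.9, 1.5),
    (10.0, 13.9, 2.0),
    (14.0, 19.9, 2.5),
    (20.0, 24.9, 3.0),
]

PRICE_PER_TABLET = 0.08  # USD, placeholder unit cost

def monthly_cost(weight_kg, days=30.4):
    for lo, hi, tabs in WEIGHT_BANDS:
        if lo <= weight_kg <= hi:
            return tabs * PRICE_PER_TABLET * days
    raise ValueError("weight outside modeled bands")

for w in (4.5, 8.0, 12.0, 18.0, 22.0):
    print(f"{w:>5.1f} kg -> {monthly_cost(w):.2f} USD/month")
```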
Composition Optimization of Lithium-Based Ternary Alloy Blankets for Fusion Reactors
NASA Astrophysics Data System (ADS)
Jolodosky, Alejandra
The goal of this dissertation is to examine the neutronic properties of a novel type of fusion reactor blanket material in the form of lithium-based ternary alloys. Pure liquid lithium, first proposed as a blanket for fusion reactors, is utilized as both a tritium breeder and a coolant. It has many attractive features such as high heat transfer and low corrosion properties, but most importantly, it has a very high tritium solubility and results in very low levels of tritium permeation throughout the facility infrastructure. However, lithium metal reacts vigorously with air and water and presents plant safety concerns including degradation of the concrete containment structure. The work of this thesis began as a collaboration with Lawrence Livermore National Laboratory in an effort to develop a lithium-based ternary alloy that can maintain the beneficial properties of lithium while reducing the reactivity concerns. The first studies down-selected alloys based on the analysis and performance of both neutronic and activation characteristics. First, 3-D Monte Carlo calculations were performed to evaluate two main neutronics performance parameters for the blanket: the tritium breeding ratio (TBR) and the energy multiplication factor (EMF). Alloys with adequate results based on TBR and EMF calculations were considered for activation analysis. Activation simulations were executed with 50 years of irradiation and 300 years of cooling. It was found that bismuth is a poor choice because it yields the highest decay heat, contact dose rates, and accident doses; in addition, it does not meet the waste disposal ratings (WDR). The straightforward approach to obtaining Monte Carlo TBR and EMF results required 231 simulations per alloy and became computationally expensive, time consuming, and inefficient. Consequently, alternative methods were pursued. A collision history-based methodology recently developed for the Monte Carlo code Serpent calculates perturbation effects on practically any quantity of interest. This allows multiple responses to be calculated by perturbing the input parameter without having to perform separate calculations. The approach was created strictly for critical systems, but it was utilized as the basis of a new methodology implemented for fixed-source problems, known as Exact Perturbation Theory (EPT). EPT can calculate the tritium breeding ratio response caused by a perturbation in the composition of the ternary alloy. The drawback of the EPT methodology is that it cannot account for the collision history at large perturbations and thus produces results with high uncertainties. Preliminary analysis of EPT with Serpent for a LiPbBa alloy demonstrated that 25 simulations per ternary must be completed so that most uncertainties calculated at large perturbations do not exceed 0.05. To reduce the uncertainties of the results, the generalized least squares (GLS) method was implemented to replace imprecise TBR results with more accurate ones. It was demonstrated that a combination of EPT Serpent calculations with the application of GLS to results with high uncertainties is the most effective and produces values with the highest fidelity. The scheme finds an alloy composition that has a TBR within a range of interest, while imposing a constraint on the EMF and a requirement to minimize the lithium concentration. It involved a three-level iteration process, with each level zooming in closer on the area of interest to fine-tune the composition.
Both alloys studied, LiPbBa and LiSnZn, had optimized compositions close to the leftmost edge of the ternary, increasing the complexity of the optimization due to the highly uncertain results found in these regions. Additional GPT methodologies were considered for the optimization studies, specifically with the use of deterministic codes. Currently, an optimization deterministic code, SMORES, is available in the SCALE code package, but only for critical systems. Subsequently, it was desired to modify this code to solve problems for fusion reactors, similarly to what was done in SWAN. So far, the fixed and adjoint source declaration and definition have been added to the input file. As a result, alterations were made to the source code so that it can read in and utilize the new input information. Due to time constraints, only a detailed outline has been created that includes the steps required to transition SMORES from critical systems to fixed-source problems. Additional time constraints prevented the planned chemical reactivity experiments on candidate alloys from being performed. Nevertheless, a review of past experiments was done, and it was determined that large-scale experiments seem more appropriate for the purpose of this work, as they would better depict how the alloys would behave in the actual reactor environment. Both air and water reactions should be considered when examining the potential chemical reactions of the lithium alloy.
Methodology for worker neutron exposure evaluation in the PDCF facility design.
Scherpelz, R I; Traub, R J; Pryor, K H
2004-01-01
A project headed by Washington Group International is intended to design the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv y(-1) for the whole body and 100 mSv y(-1) for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and the Monte Carlo neutron-photon transport code MCNP-4C to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions, and lessons were learned from this effect. This paper addresses these issues and the resulting methodology.
UW Inventory of Freight Emissions (WIFE3) heavy duty diesel vehicle web calculator methodology.
DOT National Transportation Integrated Search
2013-09-01
This document serves as an overview and technical documentation for the University of Wisconsin Inventory of Freight Emissions (WIFE3) calculator. The WIFE3 web calculator rapidly estimates future heavy duty diesel vehicle (HDDV) roadway emission...
Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test’s performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Cholrapid and Cholgold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard. PMID:29387424
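The central point, that a fixed test-to-reference relationship can yield very different accuracy values under different sample distributions, can be reproduced with a short simulation; the error model and threshold for the hypothetical Cholrapid and Cholgold methods below are invented for illustration.

```python
# Illustrative simulation: the same fixed relationship between two measurement
# methods yields different "diagnostic accuracy" values when the underlying
# distribution of the measurand changes. The numerical relationship between
# the hypothetical Cholrapid and Cholgold methods is invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
THRESHOLD = 200.0  # hypothetical decision threshold (mg/dL)

def accuracy(chol_gold):
    chol_rapid = chol_gold + rng.normal(0.0, 15.0, size=chol_gold.size)  # fixed error model
    disease = chol_gold >= THRESHOLD
    positive = chol_rapid >= THRESHOLD
    return np.mean(disease == positive)

# Same error model, two different sample distributions
narrow = rng.normal(200.0, 10.0, 50_000)   # values clustered near the threshold
wide = rng.normal(200.0, 60.0, 50_000)     # values spread far from the threshold

print("Accuracy, narrow distribution:", round(accuracy(narrow), 3))
print("Accuracy, wide distribution:  ", round(accuracy(wide), 3))
```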
Optimization of coupled multiphysics methodology for safety analysis of pebble bed modular reactor
NASA Astrophysics Data System (ADS)
Mkhabela, Peter Tshepo
The research conducted within the framework of this PhD thesis is devoted to the high-fidelity multi-physics (based on neutronics/thermal-hydraulics coupling) analysis of the Pebble Bed Modular Reactor (PBMR), which is a High Temperature Reactor (HTR). The Next Generation Nuclear Plant (NGNP) will be an HTR design. The core design and safety analysis methods are considerably less developed and mature for HTR analysis than those currently used for Light Water Reactors (LWRs). Compared to LWRs, HTR transient analysis is more demanding since it requires proper treatment of both slower and much longer transients (with time scales of hours and days) and fast and short transients (with time scales of minutes and seconds). There is limited operational and experimental data available for HTRs for validation of coupled multi-physics methodologies. This PhD work developed and verified reliable high-fidelity coupled multi-physics models, subsequently implemented in robust, efficient, and accurate computational tools to analyse the neutronics and thermal-hydraulic behaviour for design optimization and safety evaluation of the PBMR concept. The study contributed to greater accuracy of neutronics calculations by including the feedback from the thermal-hydraulics-driven temperature calculation and various multi-physics effects that can influence it. The feedback due to the influence of leakage was taken into account through the development and implementation of improved buckling feedback models. Modifications were made in the calculation procedure to ensure that the xenon depletion models were accurate for proper interpolation from cross-section tables. To achieve this, the NEM/THERMIX coupled code system was developed to create a system that is efficient and stable over the duration of transient calculations that last several tens of hours. Another achievement of the PhD thesis was the development and demonstration of a full-physics, three-dimensional safety analysis methodology for the PBMR to provide reference solutions. Different aspects of the coupled methodology were investigated and an efficient kinetics treatment for the PBMR, which accounts for all feedback phenomena in an efficient manner, was developed. The OECD/NEA PBMR-400 coupled code benchmark was used as a test matrix for the proposed investigations. The integrated thermal-hydraulics and neutronics (multi-physics) methods were extended to enable modeling of a wider range of transients pertinent to the PBMR. First, the effect of the spatial mapping schemes (spatial coupling) was studied and quantified for different types of transients, which resulted in the implementation of an improved mapping methodology based on user-defined criteria. The second aspect that was studied and optimized was the temporal coupling and meshing schemes between the neutronics and thermal-hydraulics time-step selection algorithms. Coupled-code convergence was achieved and accelerated by the application of dedicated methods. Finally, the modeling of all feedback phenomena in PBMRs was investigated and a novel treatment of cross-section dependencies was introduced to improve the representation of cross-section variations. An added benefit was that, in the process of studying and improving the coupled multi-physics methodology, more insight was gained into the physics and dynamics of the PBMR, which will also help to optimize the PBMR design and improve its safety.
One unique contribution of the PhD research is the investigation of the importance of the correct representation of the three-dimensional (3-D) effects in the PBMR analysis. The performed studies demonstrated that explicit 3-D modeling of control rod movement is superior and removes the errors associated with the grey curtain (2-D homogenized) approximation.
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficients for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed the characterization of the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in rapidly estimating the Young's modulus of the equivalent medium for wave propagation analysis.
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
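The core Green-Kubo step, integrating the autocorrelation function of a fluctuating flux, can be illustrated numerically; in the sketch below a synthetic, exponentially correlated series stands in for a real heat-flux trajectory, and the on-the-fly stationarity and error controls described in the paper are not reproduced.

```python
# Minimal numerical sketch of a Green-Kubo style estimate: integrate the
# autocorrelation function of a fluctuating flux to obtain a transport
# coefficient. A synthetic series with known correlation time replaces a
# real molecular-dynamics heat-flux trajectory for illustration.
import numpy as np

rng = np.random.default_rng(7)
dt, n, tau = 0.01, 200_000, 0.5   # time step, samples, correlation time (arbitrary units)

# Synthetic Ornstein-Uhlenbeck-like flux with unit variance and correlation time tau
flux = np.empty(n)
flux[0] = 0.0
a = np.exp(-dt / tau)
for i in range(1, n):
    flux[i] = a * flux[i - 1] + np.sqrt(1 - a**2) * rng.normal()

def autocorrelation(x, max_lag):
    x = x - x.mean()
    m = x.size
    return np.array([np.dot(x[:m - k], x[k:]) / (m - k) for k in range(max_lag)])

max_lag = int(10 * tau / dt)
acf = autocorrelation(flux, max_lag)

# Green-Kubo: the coefficient is proportional to the time integral of the ACF
coefficient = np.trapz(acf, dx=dt)
print("Integrated ACF:", round(coefficient, 3),
      "(expected ~ variance * tau =", round(flux.var() * tau, 3), ")")
```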
NASA Astrophysics Data System (ADS)
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, the most probable ones are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of the ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model, as well as the error in the distribution parameters, on the maintenance interval.
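The basic shape of such an interval selection, compute risk as failure frequency times consequence and shorten the interval until an acceptance criterion is met, can be sketched as follows; note that the sketch uses a simple Weibull placeholder rather than the lognormal reliability model of the case study, and all numbers are invented.

```python
# Illustrative risk-based maintenance sizing: risk is taken as failure
# frequency times consequence, and the maintenance interval is shortened
# until the risk meets an acceptance criterion. The Weibull reliability
# model and all numbers are placeholders, not values from the case study.
import math

CONSEQUENCE = 2.0e6          # cost per failure event (USD), placeholder
ACCEPTABLE_RISK = 5.0e3      # acceptable annualized risk (USD/yr), placeholder
ETA, BETA = 12.0, 2.2        # Weibull scale (yr) and shape, placeholder model

def annual_failure_frequency(interval_yr):
    """Approximate failure probability per year if maintained every `interval_yr`."""
    p_fail_per_cycle = 1.0 - math.exp(-((interval_yr / ETA) ** BETA))
    return p_fail_per_cycle / interval_yr

def risk(interval_yr):
    return annual_failure_frequency(interval_yr) * CONSEQUENCE

interval = 10.0
while risk(interval) > ACCEPTABLE_RISK and interval > 0.1:
    interval -= 0.1

print(f"Maintenance interval meeting the criterion: ~{interval:.1f} years "
      f"(risk {risk(interval):.0f} USD/yr)")
```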
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... information that is sensitive or proprietary, such as detailed process designs or site plans. Because the... Inputs to Emission Equations X Calculation Methodology and Methodological Tier X Data Elements Reported...
Fuller, Clifton D; Diaz, Irma; Cavanaugh, Sean X; Eng, Tony Y
2004-07-01
A patient with base-of-tongue squamous cell carcinoma and significant CT artifact-inducing metallic-alloy, non-removable dental restorations in both the mandible and maxilla was identified. Simultaneously with IMRT treatment, thermoluminescent dosimeters (TLDs) were placed in the oral cavity. After a series of three treatments, the data from the TLDs and software calculations were analyzed. Analysis of mean in vivo TLD dosimetry reveals differentials from the software-predicted dose calculation that fall within acceptable dose variation limits. IMRT dose calculation software is a relatively accurate predictor of dose attenuation and augmentation due to dental alloys within the treatment volume, as measured by intra-oral thermoluminescent dosimetry. IMRT represents a safe and effective methodology to treat patients with head and neck cancer who have non-removable metallic dental work.
Finite element calculation of residual stress in dental restorative material
NASA Astrophysics Data System (ADS)
Grassia, Luigi; D'Amore, Alberto
2012-07-01
A finite element methodology for residual stress calculation in dental restorative materials is proposed. The material under concern is a multifunctional methacrylate-based composite for dental restorations, activated by visible light. Reaction kinetics, curing shrinkage, and viscoelastic relaxation functions were required as input data for a structural finite element solver. Post-cure effects were considered in order to quantify the residual stresses arising from natural contraction with respect to those attributable to chemical shrinkage. The analysis showed, for a given test case, that the residual stresses frozen in the dental restoration at a uniform temperature of 37°C are of the same order of magnitude as the strength of the dental composite material itself.
Comparison of measured and calculated composition of irradiated EBR-II blanket assemblies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimm, K. N.
1998-07-13
In anticipation of processing irradiated EBR-II depleted uranium blanket subassemblies in the Fuel Conditioning Facility (FCF) at ANL-West, it has been possible to obtain a limited set of destructive chemical analyses of samples from a single EBR-II blanket subassembly. Comparison of calculated values with these measurements is being used to validate a depletion methodology based on a limited number of generic models of EBR-II to simulate the irradiation history of these subassemblies. Initial comparisons indicate these methods are adequate to meet the operations and material control and accountancy (MC and A) requirements for the FCF, but also indicate several shortcomings which may be corrected or improved.
Adaptive real-time methodology for optimizing energy-efficient computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Chung-Hsing; Feng, Wu-Chun
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafata, K; Ren, L; Cai, J
2016-06-15
Purpose: To develop a methodology based on digitally-reconstructed-fluoroscopy (DRF) to quantitatively assess target localization accuracy of lung SBRT, and to evaluate it using both a dynamic digital phantom and a patient dataset. Methods: For each treatment field, a 10-phase DRF is generated based on the planning 4DCT. Each frame is pre-processed with a morphological top-hat filter, and corresponding beam apertures are projected to each detector plane. A template-matching algorithm based on cross-correlation is used to detect the tumor location in each frame. Tumor motion relative to the beam aperture is extracted in the superior-inferior direction based on each frame's impulse response to the template, and the mean tumor position (MTP) is calculated as the average tumor displacement. The DRF template coordinates are then transferred to the corresponding MV-cine dataset, which is retrospectively filtered as above. The treatment MTP is calculated within each field's projection space, relative to the DRF-defined template. The field's localization error is defined as the difference between the DRF-derived MTP (planning) and the MV-cine-derived MTP (delivery). A dynamic digital phantom was used to assess the algorithm's ability to detect intra-fractional changes in patient alignment, by simulating different spatial variations in the MV-cine and calculating the corresponding change in MTP. Inter- and intra-fractional variation, IGRT accuracy, and filtering effects were investigated on a patient dataset. Results: Phantom results demonstrated a high accuracy in detecting both translational and rotational variation. The lowest localization error of the patient dataset was achieved at each fraction's first field (mean=0.38mm), with Fx3 demonstrating a particularly strong correlation between intra-fractional motion-caused localization error and treatment progress. Filtering significantly improved tracking visibility in both the DRF and MV-cine images. Conclusion: We have developed and evaluated a methodology to quantify lung SBRT target localization accuracy based on digitally-reconstructed-fluoroscopy. Our approach may be useful in potentially reducing treatment margins to optimize lung SBRT outcomes. R01-184173.
Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M
2016-10-01
To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses. © 2016 The Society for Applied Microbiology.
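The traditional (direct) efficiency calculation mentioned above compares the actual ethanol yield to the theoretical stoichiometric yield of roughly 0.511 g ethanol per g of hexose sugar; the sketch below shows only this direct calculation with illustrative broth figures, while the indirect by-product balance is more involved and is not reproduced.

```python
# Direct ("traditional") fermentation efficiency calculation against the
# theoretical stoichiometric yield of ~0.511 g ethanol per g of hexose sugar.
# The sample broth figures below are illustrative only.

THEORETICAL_YIELD = 0.511  # g ethanol per g glucose (stoichiometric maximum)

def direct_efficiency(ethanol_g_per_l, sugar_consumed_g_per_l):
    actual_yield = ethanol_g_per_l / sugar_consumed_g_per_l
    return 100.0 * actual_yield / THEORETICAL_YIELD

print(f"Efficiency: {direct_efficiency(ethanol_g_per_l=68.0, sugar_consumed_g_per_l=176.0):.1f}%")
```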
A comparative review of nurse turnover rates and costs across countries.
Duffield, Christine M; Roche, Michael A; Homer, Caroline; Buchan, James; Dimitrelis, Sofia
2014-12-01
To compare nurse turnover rates and costs from four studies in four countries (US, Canada, Australia, New Zealand) that used the same costing methodology, the original Nursing Turnover Cost Calculation Methodology. Measuring and comparing the costs and rates of turnover is difficult because of differences in definitions and methodologies. Comparative review. Searches were carried out within CINAHL, Business Source Complete and Medline for studies that used the original Nursing Turnover Cost Calculation Methodology and reported on both costs and rates of nurse turnover, published in 2014 or earlier. A comparative review of turnover data was conducted using four studies that employed the original Nursing Turnover Cost Calculation Methodology. Costing data items were converted to percentages, while total turnover costs were converted to 2014 US dollars and adjusted according to inflation rates, to permit cross-country comparisons. Despite using the same methodology, Australia reported significantly higher turnover costs ($48,790) due to higher termination (~50% of indirect costs) and temporary replacement costs (~90% of direct costs). Costs were almost 50% lower in the US ($20,561), Canada ($26,652) and New Zealand ($23,711). Turnover rates also varied significantly across countries, with the highest rate reported in New Zealand (44·3%), followed by the US (26·8%), Canada (19·9%) and Australia (15·1%). A significant proportion of turnover costs is attributed to temporary replacement, highlighting the importance of nurse retention. The authors suggest a minimum dataset is also required to eliminate potential variability across countries, states, hospitals and departments. © 2014 John Wiley & Sons Ltd.
Bayesian probability of success for clinical trials using historical data
Ibrahim, Joseph G.; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F.; Heyse, Joseph F.
2015-01-01
Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein’s work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. PMID:25339499
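A generic Monte Carlo probability-of-success ("assurance") calculation conveys the flavor of the quantity discussed above; the sketch below is a simplified illustration, not the covariate-adjusted methodology developed in the paper, and all posterior and trial parameters are placeholders.

```python
# Generic Monte Carlo sketch of a Bayesian probability-of-success ("assurance")
# calculation: draw the treatment effect from a posterior summarizing current
# data, simulate a future trial for each draw, and count how often it succeeds.
# This is a simplified illustration; all numbers are placeholders.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

# Posterior for the treatment effect from the current trial (placeholder)
post_mean, post_sd = 0.30, 0.15

# Planned future trial
n_per_arm, sigma, alpha = 150, 1.0, 0.05
se_future = sigma * np.sqrt(2.0 / n_per_arm)
z_crit = norm.ppf(1 - alpha / 2)

draws = 100_000
theta = rng.normal(post_mean, post_sd, size=draws)   # uncertainty in the true effect
est = rng.normal(theta, se_future)                   # simulated future trial estimates
success = (np.abs(est) / se_future > z_crit) & (est > 0)  # significant, correct direction

print("Probability of success:", round(success.mean(), 3))
```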
Gis-Based Accessibility Analysis of Urban Emergency Shelters: the Case of Adana City
NASA Astrophysics Data System (ADS)
Unal, M.; Uslu, C.
2016-10-01
Accessibility analysis of urban emergency shelters can help support urban disaster prevention planning. Pre-disaster emergency evacuation zoning has become a significant topic in disaster prevention and mitigation research. In this study, we assessed the level of serviceability of urban emergency shelters in terms of maximum capacity, usability, sufficiency and a given walking-time limit by employing the spatial analysis techniques of GIS Network Analyst. The methodology included the following aspects: the distribution analysis of emergency evacuation demands, the calculation of shelter space accessibility and the optimization of evacuation destinations. This methodology was applied to Adana, a city in Turkey, which is located within the Alpine-Himalayan orogenic system, the second major earthquake belt after the Pacific Belt. It was found that the proposed methodology could be useful in helping to understand the spatial distribution of urban emergency shelters more accurately and in establishing effective future urban disaster prevention planning. Additionally, this research provides a feasible way of supporting emergency management in terms of shelter construction, pre-disaster evacuation drills and rescue operations.
Bayesian probability of success for clinical trials using historical data.
Ibrahim, Joseph G; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F; Heyse, Joseph F
2015-01-30
Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein's work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. Copyright © 2014 John Wiley & Sons, Ltd.
EVALUATING METRICS FOR GREEN CHEMISTRIES: INFORMATION AND CALCULATION NEEDS
Research within the U.S. EPA's National Risk Management Research Laboratory is developing a methodology for the evaluation of green chemistries. This methodology called GREENSCOPE (Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Ob...
NASA Astrophysics Data System (ADS)
Androulaki, Eleni; Vergadou, Niki; Ramos, Javier; Economou, Ioannis G.
2012-06-01
Molecular dynamics (MD) simulations have been performed in order to investigate the properties of [Cnmim+][Tf2N-] (n = 4, 8, 12) ionic liquids (ILs) in a wide temperature range (298.15-498.15 K) and at atmospheric pressure (1 bar). A previously developed methodology for the calculation of the charge distribution that incorporates ab initio quantum mechanical calculations based on density functional theory (DFT) was used to calculate the partial charges for the classical molecular simulations. The wide range of time scales that characterize the segmental dynamics of these ILs, especially at low temperatures, required very long MD simulations, on the order of several tens of nanoseconds, to calculate the thermodynamic (density, thermal expansion, isothermal compressibility), structural (radial distribution functions between the centers of mass of ions and between individual sites, radial-angular distribution functions) and dynamic (relaxation times of the reorientation of the bonds and the torsion angles, self-diffusion coefficients, shear viscosity) properties. The influence of the temperature and the cation's alkyl chain length on the above-mentioned properties was thoroughly investigated. The calculated thermodynamic (primary and derivative) and structural properties are in good agreement with the experimental data, while the extremely sluggish dynamics of the ILs under study renders the calculation of their transport properties a very complicated and challenging task, especially at low temperatures.
How recalibration method, pricing, and coding affect DRG weights
Carter, Grace M.; Rogowski, Jeannette A.
1992-01-01
We compared diagnosis-related group (DRG) weights calculated using the hospital-specific relative-value (HSRV) methodology with those calculated using the standard methodology for each year from 1985 through 1989 and analyzed differences between the two methods in detail for 1989. We provide evidence suggesting that classification error and subsidies of higher weighted cases by lower weighted cases caused compression in the weights used for payment as late as the fifth year of the prospective payment system. However, later weights calculated by the standard method are not compressed because a statistical correlation between high markups and high case-mix indexes offsets the cross-subsidization. HSRV weights from the same files are compressed because this methodology is more sensitive to cross-subsidies. However, both sets of weights produce equally good estimates of hospital-level costs net of those expenses that are paid by outlier payments. The greater compression of the HSRV weights is counterbalanced by the fact that more high-weight cases qualify as outliers. PMID:10127456
van Mil, Anke C C M; Greyling, Arno; Zock, Peter L; Geleijnse, Johanna M; Hopman, Maria T; Mensink, Ronald P; Reesink, Koen D; Green, Daniel J; Ghiadoni, Lorenzo; Thijssen, Dick H
2016-09-01
Brachial artery flow-mediated dilation (FMD) is a popular technique to examine endothelial function in humans. Identifying volunteer and methodological factors related to variation in FMD is important to improve measurement accuracy and applicability. Volunteer-related and methodology-related parameters were collected in 672 volunteers from eight affiliated centres worldwide who underwent repeated measures of FMD. All centres adopted contemporary expert-consensus guidelines for FMD assessment. After calculating the coefficient of variation (%) of the FMD for each individual, we constructed quartiles (n = 168 per quartile). Based on two regression models (volunteer-related factors and methodology-related factors), statistically significant components of these two models were added to a final regression model (calculated as β-coefficient and R). This allowed us to identify factors that independently contributed to the variation in FMD%. The median coefficient of variation was 17.5%, with healthy volunteers demonstrating a coefficient of variation of 9.3%. Regression models revealed age (β = 0.248, P < 0.001), hypertension (β = 0.104, P < 0.001), dyslipidemia (β = 0.331, P < 0.001), time between measurements (β = 0.318, P < 0.001), lab experience (β = -0.133, P < 0.001) and baseline FMD% (β = 0.082, P < 0.05) as contributors to the coefficient of variation. After including all significant factors in the final model, we found that time between measurements, hypertension, baseline FMD% and lab experience with FMD independently predicted brachial artery variability (total R = 0.202). Although FMD% showed good reproducibility, larger variation was observed in conditions with longer time between measurements, hypertension, less experience and lower baseline FMD%. Accounting for these factors may improve FMD% variability.
Efficient free energy calculations of quantum systems through computer simulations
NASA Astrophysics Data System (ADS)
Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo
2009-03-01
In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions for the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. This new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the calculated values using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)
NASA Astrophysics Data System (ADS)
Chepur, Petr; Tarasenko, Alexander; Gruchenkova, Alesya
2017-10-01
The paper focuses on the problem of estimating the stress-strain state of vertical steel tanks with inadmissible geometric imperfections in the wall shape. The authors refer to an actual tank to demonstrate that the use of certain design schemes can lead to gross errors and, accordingly, to unreliable results; such design schemes therefore cannot serve as the basis for choosing real repair technologies. For that reason, the authors performed calculations for a tank removed from service for repair, based on a developed finite-element model of the VST-5000 tank with a conical roof. The proposed approach was developed for the analysis of the stress-strain state (SSS) of a tank having geometric imperfections of the wall shape. Based on the results of the work, it was proposed to amend the Annex A methodology "Method for calculating the stress-strain state of the tank wall during repair by lifting the tank and replacing the wall metal structures" by inserting a requirement to take into account the actual stiffness of the entire VST structure and its roof when calculating the structure's stress-strain state.
Thermodynamics of concentrated solid solution alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Michael C.; Zhang, C.; Gao, P.
This study reviews the three main approaches for predicting the formation of concentrated solid solution alloys (CSSA) and for modeling their thermodynamic properties, in particular, utilizing the methodologies of empirical thermo-physical parameters, CALPHAD method, and first-principles calculations combined with hybrid Monte Carlo/Molecular Dynamics (MC/MD) simulations. In order to speed up CSSA development, a variety of empirical parameters based on Hume-Rothery rules have been developed. Herein, these parameters have been systematically and critically evaluated for their efficiency in predicting solid solution formation. The phase stability of representative CSSA systems is then illustrated from the perspectives of phase diagrams and nucleation driving force plots of the σ phase using CALPHAD method. The temperature-dependent total entropies of the FCC, BCC, HCP, and σ phases in equimolar compositions of various systems are presented next, followed by the thermodynamic properties of mixing of the BCC phase in Al-containing and Ti-containing refractory metal systems. First-principles calculations on model FCC, BCC and HCP CSSA reveal the presence of both positive and negative vibrational entropies of mixing, while the calculated electronic entropies of mixing are negligible. Temperature dependent configurational entropy is determined from the atomic structures obtained from MC/MD simulations. Current status and challenges in using these methodologies as they pertain to thermodynamic property analysis and CSSA design are discussed.
NASA Technical Reports Server (NTRS)
Banish, R. Michael; Brantschen, Segolene; Pourpoint, Timothee L.; Wessling, Francis; Sekerka, Robert F.
2003-01-01
This paper presents methodologies for measuring the thermal diffusivity using the difference between temperatures measured at two, essentially independent, locations. A heat pulse is applied for an arbitrary time to one region of the sample, either the inner core or the outer wall. Temperature changes are then monitored versus time. The thermal diffusivity is calculated from the temperature difference versus time. No initial conditions are used directly in the final results.
Application of Tube Dynamics to Non-Statistical Reaction Processes
NASA Astrophysics Data System (ADS)
Gabern, F.; Koon, W. S.; Marsden, J. E.; Ross, S. D.; Yanao, T.
2006-06-01
A technique based on dynamical systems theory is introduced for the computation of lifetime distributions and rates of chemical reactions and scattering phenomena, even in systems that exhibit non-statistical behavior. In particular, we merge invariant manifold tube dynamics with Monte Carlo volume determination for accurate rate calculations. This methodology is applied to a three-degree-of-freedom model problem and some ideas on how it might be extended to higher-degree-of-freedom systems are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disney, R.K.
1994-10-01
The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.
The Long Exercise Test in Periodic Paralysis: A Bayesian Analysis.
Simmons, Daniel B; Lanning, Julie; Cleland, James C; Puwanant, Araya; Twydell, Paul T; Griggs, Robert C; Tawil, Rabi; Logigian, Eric L
2018-05-12
The long exercise test (LET) is used to assess the diagnosis of periodic paralysis (PP), but LET methodology and normal "cut-off" values vary. To determine optimal LET methodology and cut-offs, we reviewed LET data (abductor digiti minimi (ADM) motor response amplitude, area) from 55 PP patients (32 genetically definite) and 125 controls. Receiver operating characteristic (ROC) curves were constructed and area-under-the-curve (AUC) calculated to compare 1) peak-to-nadir versus baseline-to-nadir methodologies, and 2) amplitude versus area decrements. Using Bayesian principles, optimal "cut-off" decrements that achieved 95% post-test probability of PP were calculated for various pre-test probabilities (PreTPs). AUC was highest for peak-to-nadir methodology and equal for amplitude and area decrements. For PreTP ≤50%, optimal decrement cut-offs (peak-to-nadir) were >40% (amplitude) or >50% (area). For confirmation of PP, our data endorse the diagnostic utility of peak-to-nadir LET methodology using 40% amplitude or 50% area decrement cut-offs for PreTPs ≤50%. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
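The Bayesian step described above can be reproduced with a few lines: a pre-test probability is converted to odds, multiplied by the positive likelihood ratio of the chosen decrement cut-off, and converted back. The sensitivity and specificity below are placeholders, not the values estimated from the ROC curves in the study.

```python
def post_test_probability(pre_test_p, sensitivity, specificity):
    """Post-test probability of periodic paralysis after a positive LET,
    using the positive likelihood ratio LR+ = sens / (1 - spec)."""
    lr_pos = sensitivity / (1.0 - specificity)
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * lr_pos
    return post_odds / (1.0 + post_odds)

# Hypothetical operating point for a >40% amplitude-decrement cut-off
print(post_test_probability(pre_test_p=0.50, sensitivity=0.70, specificity=0.98))
```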
A Five-Dimensional Mathematical Model for Regional and Global Changes in Cardiac Uptake and Motion
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Gifford, H. C.
2004-10-01
The objective of this work was to simultaneously introduce known regional changes in contraction pattern and perfusion to the existing gated Mathematical Cardiac Torso (MCAT) phantom heart model. We derived a simple integral to calculate the fraction of the ellipsoidal volume that makes up the left ventricle (LV), taking into account the stationary apex and the moving base. After calculating the LV myocardium volume of the existing beating heart model, we employed the property of conservation of mass to manipulate the LV ejection fraction to values ranging between 13.5% and 68.9%. Multiple dynamic heart models that differ in degree of LV wall thickening, base-to-apex motion, and ejection fraction, are thus available for use with the existing MCAT methodology. To introduce more complex regional LV contraction and perfusion patterns, we used composites of dynamic heart models to create a central region with little or no motion or perfusion, surrounded by a region in which the motion and perfusion gradually reverts to normal. To illustrate this methodology, the following gated cardiac acquisitions for different clinical situations were simulated analytically: 1) reduced regional motion and perfusion; 2) same perfusion as in (1) without motion intervention; and 3) washout from the normal and diseased myocardial regions. Both motion and perfusion can change dynamically during a single rotation or multiple rotations of a simulated single-photon emission computed tomography acquisition system.
Car-Parrinello simulation of hydrogen bond dynamics in sodium hydrogen bissulfate.
Pirc, Gordana; Stare, Jernej; Mavri, Janez
2010-06-14
We studied proton dynamics of a short hydrogen bond of the crystalline sodium hydrogen bissulfate, a hydrogen-bonded ferroelectric system. Our approach was based on the established Car-Parrinello molecular dynamics (CPMD) methodology, followed by an a posteriori quantization of the OH stretching motion. The latter approach is based on snapshot structures taken from the CPMD trajectory, calculation of proton potentials, and solving of the vibrational Schrödinger equation for each of the snapshot potentials. The contour of the OH stretching band obtained in this way has its center of gravity at about 1540 cm⁻¹ and a half width of about 700 cm⁻¹, which is in qualitative agreement with the experimental infrared spectrum. The corresponding values for the deuterated form are 1092 and 600 cm⁻¹, respectively. The hydrogen probability densities obtained by solving the vibrational Schrödinger equation allow for the evaluation of the potential of mean force along the proton transfer coordinate. We demonstrate that for the present system the free energy profile is of the single-well type and features a broad and shallow minimum near the center of the hydrogen bond, allowing for frequent and barrierless proton (or deuteron) jumps. All the calculated time-averaged geometric parameters were in reasonable agreement with the experimental neutron diffraction data. As the present methodology for quantization of proton motion is applicable to a variety of hydrogen-bonded systems, it is promising for potential use in computational enzymology.
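The a posteriori quantization step — solving a one-dimensional vibrational Schrödinger equation for each snapshot proton potential — can be sketched with a finite-difference Hamiltonian as below. The grid, quartic potential and atomic-unit conventions are illustrative assumptions, not the authors' snapshot potentials.

```python
import numpy as np

HBAR = 1.0  # atomic units assumed throughout

def vibrational_levels(x, v, mass):
    """Eigenvalues/eigenfunctions of the 1D Schrodinger equation for a
    snapshot potential v(x), via a second-order finite-difference
    Hamiltonian on a uniform grid x."""
    dx = x[1] - x[0]
    n = len(x)
    kinetic = (HBAR**2 / (2.0 * mass * dx**2)) * (
        2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    )
    energies, states = np.linalg.eigh(kinetic + np.diag(v))
    return energies, states

x = np.linspace(-1.5, 1.5, 501)   # bohr, proton displacement along O-H...O
v = 0.02 * x**4                   # hartree, illustrative quartic single well
mass = 1836.15                    # proton mass in electron masses
e, psi = vibrational_levels(x, v, mass)
print("0 -> 1 transition (hartree):", e[1] - e[0])
```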
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toro, Javier, E-mail: jjtoroca@unal.edu.co; Requena, Ignacio, E-mail: requena@decsai.ugr.es; Duarte, Oscar, E-mail: ogduartev@unal.edu.co
In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on the evaluation of the strength of the impact by grading a series of qualitative attributes that can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are eliminated that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. Highlights: • Concept of vulnerability has been used to calculate the importance impact assessment. • This paper defined Total Impact Importance and specified a quantification procedure. • The method includes the characteristics of environmental and project activities. • The application has shown greater objectivity in the evaluation of impacts. • Better correlation between impact values, environment and the project has been shown.
Lihachev, Alexey; Lihacova, Ilze; Plorina, Emilija V.; Lange, Marta; Derjabo, Alexander; Spigulis, Janis
2018-01-01
A clinical trial on the autofluorescence imaging of skin lesions comprising 16 dermatologically confirmed pigmented nevi, 15 seborrheic keratosis, 2 dysplastic nevi, histologically confirmed 17 basal cell carcinomas and 1 melanoma was performed. The autofluorescence spatial properties of the skin lesions were acquired by smartphone RGB camera under 405 nm LED excitation. The diagnostic criterion is based on the calculation of the mean autofluorescence intensity of the examined lesion in the spectral range of 515 nm–700 nm. The proposed methodology is able to differentiate seborrheic keratosis from basal cell carcinoma, pigmented nevi and melanoma. The sensitivity and specificity of the proposed method was estimated as being close to 100%. The proposed methodology and potential clinical applications are discussed in this article. PMID:29675324
Application of Adjoint Methodology in Various Aspects of Sonic Boom Design
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.
2014-01-01
One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.
Development of performance measurement for freight transportation.
DOT National Transportation Integrated Search
2014-09-01
In this project, the researchers built a set of performance measures that are unified, user-oriented, scalable, systematic, effective, and calculable for intermodal freight management and developed methodologies to calculate and use the measures.
Accounting for the drug life cycle and future drug prices in cost-effectiveness analysis.
Hoyle, Martin
2011-01-01
Economic evaluations of health technologies typically assume constant real drug prices and model only the cohort of patients currently eligible for treatment. It has recently been suggested that, in the UK, we should assume that real drug prices decrease at 4% per annum and, in New Zealand, that real drug prices decrease at 2% per annum and at patent expiry the drug price falls. It has also recently been suggested that we should model multiple future incident cohorts. In this article, the cost effectiveness of drugs is modelled based on these ideas. Algebraic expressions are developed to capture all costs and benefits over the entire life cycle of a new drug. The lifetime of a new drug in the UK, a key model parameter, is estimated as 33 years, based on the historical lifetime of drugs in England over the last 27 years. Under the proposed methodology, cost effectiveness is calculated for seven new drugs recently appraised in the UK. Cost effectiveness as assessed in the future is also estimated. Whilst the article is framed in mathematics, the findings and recommendations are also explained in non-mathematical language. The 'life-cycle correction factor' is introduced, which is used to convert estimates of cost effectiveness as traditionally calculated into estimates under the proposed methodology. Under the proposed methodology, all seven drugs appear far more cost effective in the UK than published. For example, the incremental cost-effectiveness ratio decreases by 46%, from £61,900 to £33,500 per QALY, for cinacalcet versus best supportive care for end-stage renal disease, and by 45%, from £31,100 to £17,000 per QALY, for imatinib versus interferon-α for chronic myeloid leukaemia. Assuming real drug prices decrease over time, the chance that a drug is publicly funded increases over time, and is greater when modelling multiple cohorts than with a single cohort. Using the methodology (compared with traditional methodology) all drugs in the UK and New Zealand are predicted to be more cost effective. It is suggested that the willingness-to-pay threshold should be reduced in the UK and New Zealand. The ranking of cost effectiveness will change with drugs assessed as relatively more cost effective and medical devices and surgical procedures relatively less cost effective than previously thought. The methodology is very simple to implement. It is suggested that the model should be parameterized for other countries.
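A toy numerical sketch of the underlying idea — averaging a declining real drug price over the product's market lifetime and rescaling only the drug-cost share of the incremental cost — is given below. It is not Hoyle's algebra; the discount rate, drug-cost share and single-payment-per-year simplification are assumptions.

```python
import numpy as np

def discounted_price_factor(lifetime_years=33, price_decline=0.04, discount=0.035):
    """Discount-weighted average real drug price over the life cycle,
    as a fraction of today's price (one payment per year assumed)."""
    years = np.arange(lifetime_years)
    price = (1.0 - price_decline) ** years
    weight = 1.0 / (1.0 + discount) ** years
    return np.sum(price * weight) / np.sum(weight)

def rescaled_icer(icer, drug_cost_share, factor):
    """Scale only the drug-acquisition share of incremental cost;
    QALY gains and other costs are left unchanged."""
    return icer * (drug_cost_share * factor + (1.0 - drug_cost_share))

f = discounted_price_factor()
print(round(f, 3), round(rescaled_icer(61900, drug_cost_share=0.9, factor=f)))
```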
Critical care medicine beds, use, occupancy and costs in the United States: a methodological review
Halpern, Neil A; Pastores, Stephen M.
2017-01-01
This article is a methodological review to help the intensivist gain insights into the classic and sometimes arcane maze of national databases and methodologies used to determine and analyze the intensive care unit (ICU) bed supply, occupancy rates, and costs in the United States (US). Data for total ICU beds, use and occupancy can be derived from two large national healthcare databases: the Healthcare Cost Report Information System (HCRIS) maintained by the federal Centers for Medicare and Medicaid Services (CMS) and the proprietary Hospital Statistics of the American Hospital Association (AHA). Two costing methodologies can be used to calculate ICU costs: the Russell equation and national projections. Both methods are based on cost and use data from the national hospital datasets or from defined groups of hospitals or patients. At the national level, an understanding of US ICU beds, use and cost helps provide clarity to the width and scope of the critical care medicine (CCM) enterprise within the US healthcare system. This review will also help the intensivist better understand published studies on administrative topics related to CCM and be better prepared to participate in their own local hospital organizations or regional CCM programs. PMID:26308432
Using Risk Assessment Methodologies to Meet Management Objectives
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2015-01-01
Current decision making involves numerous possible combinations of technology elements, safety and health issues, operational aspects and process considerations to satisfy program goals. Identifying potential risk considerations as part of the management decision making process provides additional tools to make more informed management decisions. Adapting and using risk assessment methodologies can generate new perspectives on various risk and safety concerns that are not immediately apparent. Safety and operational risks can be identified and final decisions can balance these considerations with cost and schedule risks. Additional assessments can also show likelihood of event occurrence and event consequence to provide a more informed basis for decision making, as well as cost effective mitigation strategies. Methodologies available to perform Risk Assessments range from qualitative identification of risk potential, to detailed assessments where quantitative probabilities are calculated. The methodology used should be based on factors that include: 1) type of industry and industry standards, 2) tasks, tools, and environment, 3) type and availability of data, and 4) industry views and requirements regarding risk & reliability. Risk Assessments are a tool for decision makers to understand potential consequences and be in a position to reduce, mitigate or eliminate costly mistakes or catastrophic failures.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of temperature dependence of the continuous energy nuclear data has been investigated.
TEA: A Code Calculating Thermochemical Equilibrium Abundances
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
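The variational idea behind TEA — minimize the total Gibbs energy of an ideal-gas mixture subject to elemental abundance constraints — can be sketched with a generic constrained optimizer as below. This is not TEA's Lagrangian iteration, and the three-species C/O system and its standard Gibbs energies are purely illustrative placeholders.

```python
import numpy as np
from scipy.optimize import minimize

R = 8.314462618  # J/(mol K)

def equilibrium_moles(g0, elem_matrix, elem_totals, T, P_bar=1.0):
    """Minimize the total Gibbs energy of an ideal-gas mixture subject to
    element conservation (elem_matrix @ n = elem_totals). A generic SLSQP
    solver stands in for TEA's Lagrangian iteration."""
    n_species = len(g0)

    def total_gibbs(n):
        n = np.clip(n, 1e-12, None)
        return float(np.sum(n * (g0 + R * T * np.log(n * P_bar / n.sum()))))

    constraints = {"type": "eq", "fun": lambda n: elem_matrix @ n - elem_totals}
    n0 = np.full(n_species, elem_totals.sum() / n_species)
    result = minimize(total_gibbs, n0, method="SLSQP",
                      bounds=[(1e-12, None)] * n_species, constraints=constraints)
    return result.x

# Illustrative 3-species C/O system: CO, CO2, O2 with made-up standard Gibbs energies
g0 = np.array([-200e3, -400e3, -60e3])     # J/mol, placeholders
A = np.array([[1, 1, 0],                   # C atoms per molecule
              [1, 2, 2]])                  # O atoms per molecule
b = np.array([1.0, 2.0])                   # total moles of C and O
print(equilibrium_moles(g0, A, b, T=1500.0))
```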
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
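A highly simplified sketch of two ingredients described above — searching the uncertain parameter box for the worst-case combination of a requirement function, then estimating the failure probability under an assumed uncertainty model — is shown below. The requirement function, box and uniform distribution are placeholders, not the paper's framework.

```python
import numpy as np
from scipy.optimize import minimize

def requirement(theta):
    """Hypothetical requirement function g(theta): g < 0 is safe,
    g >= 0 violates the requirement."""
    k, c = theta
    return 1.0 - k * c

bounds = [(0.5, 2.0), (0.2, 1.5)]  # assumed uncertainty box

# Worst-case uncertainty combination: maximize g over the box
worst = minimize(lambda th: -requirement(th), x0=[1.0, 0.8], bounds=bounds)
print("worst case:", worst.x, "g =", requirement(worst.x))

# Crude failure-probability estimate under an assumed uniform model
rng = np.random.default_rng(0)
lo = [b[0] for b in bounds]
hi = [b[1] for b in bounds]
samples = rng.uniform(lo, hi, size=(20000, 2))
g_vals = np.array([requirement(s) for s in samples])
print("P(failure) ~", float((g_vals >= 0).mean()))
```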
Methodology for estimating soil carbon for the forest carbon budget model of the United States, 2001
L. S. Heath; R. A. Birdsey; D. W. Williams
2002-01-01
The largest carbon (C) pool in United States forests is the soil C pool. We present methodology and soil C pool estimates used in the FORCARB model, which estimates and projects forest carbon budgets for the United States. The methodology balances knowledge, uncertainties, and ease of use. The estimates are calculated using the USDA Natural Resources Conservation...
Crude and intrinsic birth rates for Asian countries.
Rele, J R
1978-01-01
An attempt to estimate birth rates for Asian countries. The main source of information in developing countries has been the census age-sex distribution, although inaccuracies in the basic data have made it difficult to reach a high degree of accuracy. Different methods bring widely varying results. The methodology presented here is based on the use of the conventional child-woman ratio from the census age-sex distribution, with a rough estimate of the expectation of life at birth. From the established relationships between the child-woman ratio and the intrinsic birth rate, of the form y = a + bx + cx², at each level of life expectation, the intrinsic birth rate is first computed using coefficients already computed. The crude birth rate is obtained using the adjustment based on the census age-sex distribution. An advantage of this methodology is that the intrinsic birth rate, normally an involved computation, can be obtained relatively easily as a byproduct of the crude birth rates and the bases for the calculations for each of 33 Asian countries, in some cases over several time periods.
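For concreteness, the fitted relation can be evaluated directly once coefficients for a given life-expectancy level are available; the coefficients and child-woman ratio below are invented placeholders, not Rele's published values.

```python
def intrinsic_birth_rate(child_woman_ratio, a, b, c):
    """Evaluate the quadratic y = a + b*x + c*x**2 linking the child-woman
    ratio x to the intrinsic birth rate y at one life-expectancy level."""
    x = child_woman_ratio
    return a + b * x + c * x * x

# Illustration only: made-up coefficients and ratio
print(intrinsic_birth_rate(0.55, a=5.0, b=60.0, c=-10.0))
```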
NASA Astrophysics Data System (ADS)
Wang, Dai; Gao, Junyu; Li, Pan; Wang, Bin; Zhang, Cong; Saxena, Samveg
2017-08-01
Modeling PEV travel and charging behavior is the key to estimate the charging demand and further explore the potential of providing grid services. This paper presents a stochastic simulation methodology to generate itineraries and charging load profiles for a population of PEVs based on real-world vehicle driving data. In order to describe the sequence of daily travel activities, we use the trip chain model which contains the detailed information of each trip, namely start time, end time, trip distance, start location and end location. A trip chain generation method is developed based on the Naive Bayes model to generate a large number of trips which are temporally and spatially coupled. We apply the proposed methodology to investigate the multi-location charging loads in three different scenarios. Simulation results show that home charging can meet the energy demand of the majority of PEVs in an average condition. In addition, we calculate the lower bound of charging load peak on the premise of lowest charging cost. The results are instructive for the design and construction of charging facilities to avoid excessive infrastructure.
NASA Astrophysics Data System (ADS)
Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha
2016-04-01
This paper shows a method to assess vulnerability to coastal risks such as coastal erosion or marine submersion by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques with a Geographic Information System (GIS). The coast of Mohammedia, located in Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping follows multi-parametric causative factors such as sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to an urban area. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The result shows that the coastline of Mohammedia is characterized by a moderate, high and very high level of vulnerability to coastal risk. The high vulnerability areas are situated in the east at Monika and Sablette beaches. This technical approach, based on the efficiency of the Geographic Information System tool and the Fuzzy Analytical Hierarchy Process, helps decision makers find optimal strategies to minimize coastal risks.
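The weight-derivation step can be sketched with the crisp AHP eigenvector method below; the full FAHP replaces the crisp judgments with fuzzy numbers and adds a defuzzification step, which is omitted here, and the pairwise judgments are invented.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights as the normalized principal eigenvector of a
    (crisp) pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Hypothetical judgments for three criteria: sea level rise, wave height, erosion
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
print(ahp_weights(pairwise))
```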
Monte Carlo-based validation of neutronic methodology for EBR-II analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, J.R.; Finck, P.J.
1993-01-01
The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.
van Gelder, P.H.A.J.M.; Nijs, M.
2011-01-01
Decisions about pharmacotherapy are being taken by medical doctors and authorities based on comparative studies on the use of medications. In studies on fertility treatments in particular, the methodological quality is of utmost importance in the application of evidence-based medicine and systematic reviews. Nevertheless, flaws and omissions appear quite regularly in these types of studies. The current study aims to present an overview of some of the typical statistical flaws, illustrated by a number of example studies which have been published in peer reviewed journals. Based on an investigation of eleven randomly selected studies on fertility treatments with cryopreservation, it appeared that the methodological quality of these studies often did not fulfil the required statistical criteria. The following statistical flaws were identified: flaws in study design, patient selection, and units of analysis or in the definition of the primary endpoints. Other errors could be found in p-value and power calculations or in critical p-value definitions. Proper interpretation of the results and/or use of these study results in a meta-analysis should therefore be conducted with care. PMID:24753877
A broad-group cross-section library based on ENDF/B-VII.0 for fast neutron dosimetry applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpan, F.A.
2011-07-01
A new ENDF/B-VII.0-based coupled 44-neutron, 20-gamma-ray-group cross-section library was developed to investigate the latest evaluated nuclear data file (ENDF), in comparison to ENDF/B-VI.3 used in BUGLE-96, as well as to generate an objective-specific library. The objectives selected for this work consisted of dosimetry calculations for in-vessel and ex-vessel reactor locations, iron atom displacement calculations for reactor internals and pressure vessel, and ⁵⁸Ni(n,γ) calculation that is important for gas generation in the baffle plate. The new library was generated based on the contribution and point-wise cross-section-driven (CPXSD) methodology and was applied to one of the most widely used benchmarks, the Oak Ridge National Laboratory Pool Critical Assembly benchmark problem. In addition to the new library, BUGLE-96 and an ENDF/B-VII.0-based coupled 47-neutron, 20-gamma-ray-group cross-section library was generated and used with both SNLRML and IRDF dosimetry cross sections to compute reaction rates. All reaction rates computed by the multigroup libraries are within ±20% of measurement data and meet the U.S. Nuclear Regulatory Commission acceptance criterion for reactor vessel neutron exposure evaluations specified in Regulatory Guide 1.190. (authors)
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity completely self-contained Monte-Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, H. B. Robinson benchmark and OECD/NEA Phase IVB are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
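Step (4) — adjusting local kinetic constants until calculated yields match the measurements — is, at its core, a nonlinear least-squares fit. The sketch below does this for a generic 3-lump first-order cracking scheme with made-up "measured" yields; it stands in for, and is much simpler than, the coupled CFD/kinetics calculation of the patent.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def three_lump_yields(k, residence_times):
    """Yields of feed (A), gasoline (B) and gas+coke (C) from a simple
    3-lump first-order cracking scheme A->B, A->C, B->C."""
    k1, k2, k3 = k

    def rhs(t, y):
        a, b, c = y
        return [-(k1 + k2) * a, k1 * a - k3 * b, k2 * a + k3 * b]

    sol = solve_ivp(rhs, (0.0, residence_times[-1]), [1.0, 0.0, 0.0],
                    t_eval=residence_times, rtol=1e-8)
    return sol.y.T  # shape (n_times, 3)

# Hypothetical "measured" yields at three residence times (s)
t_obs = np.array([1.0, 2.0, 4.0])
y_obs = np.array([[0.55, 0.35, 0.10],
                  [0.32, 0.50, 0.18],
                  [0.12, 0.55, 0.33]])

fit = least_squares(lambda k: (three_lump_yields(k, t_obs) - y_obs).ravel(),
                    x0=[0.5, 0.1, 0.05], bounds=(0.0, np.inf))
print("extracted local constants:", fit.x)
```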
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.
Designing the Alluvial Riverbeds in Curved Paths
NASA Astrophysics Data System (ADS)
Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina
2017-10-01
The paper presents the method of determining the shape of the riverbed in curves of the watercourse, which is based on the method of Ikeda (1975) developed for a slightly curved path in a sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides the appropriate basis for river restoration. Based on the research in the experimental reach of the Holeška Brook and several alluvial mountain streams, the methodology was adjusted. The method also takes into account other important characteristics of bottom material - the shape and orientation of the particles, settling velocity and drag coefficients. Thus, the method is mainly meant for the natural sand-gravel material, which is heterogeneous and whose particle shape is very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of optimal habitat, but also for the design of foundations of armouring of the bankside of the channel. The input data is adapted to the conditions of design practice.
Harmonised pesticide risk trend indicator for food (HAPERITIF): The methodological approach.
Calliera, Maura; Finizio, Antonio; Azimonti, Giovanna; Benfenati, Emilio; Trevisan, Marco
2006-12-01
To provide a harmonised European approach for pesticide risk indicators, the Sixth EU Framework Programme recently financed the HAIR (HArmonised environmental Indicators for pesticide Risk) project. This paper illustrates the methodology underlying a new indicator-HAPERITIF (HArmonised PEsticide RIsk Trend Indicator for Food), developed in HAIR, for tracking acute and chronic pesticide risk trends for consumers. The acute indicator, HAPERITIF(ac), is based on the ratio between an estimated short-term intake (ESTI), calculated as recommended by the World Health Organisation (WHO), and the acute reference dose (ARfD); the chronic indicator HAPERITIF(chr) is based on the ratio between an estimated daily intake (EDI) and the admissible daily intake (ADI). HAPERITIF can be applied at different levels of aggregation. Each level gives information for proper risk management of pesticides to reduce the risk associated with food consumption. An example of application using realistic scenarios of pesticide treatments on a potato crop in central-northern Italy is reported to illustrate the different steps of HAPERITIF. Copyright 2006 Society of Chemical Industry.
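The two ratios at the heart of the indicator are straightforward to compute once the intake estimates and toxicological reference values are known; the numbers below are illustrative only, not the potato-crop scenario of the paper.

```python
def haperitif_acute(esti_mg_per_kg_bw, arfd_mg_per_kg_bw):
    """Acute indicator: ratio of the estimated short-term intake (ESTI)
    to the acute reference dose (ARfD); values above 1 flag a concern."""
    return esti_mg_per_kg_bw / arfd_mg_per_kg_bw

def haperitif_chronic(edi_mg_per_kg_bw_day, adi_mg_per_kg_bw_day):
    """Chronic indicator: ratio of the estimated daily intake (EDI)
    to the admissible daily intake (ADI)."""
    return edi_mg_per_kg_bw_day / adi_mg_per_kg_bw_day

# Illustrative numbers only
print(haperitif_acute(0.004, 0.02), haperitif_chronic(0.0006, 0.01))
```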
NASA Astrophysics Data System (ADS)
Avilova, I. P.; Krutilova, M. O.
2018-01-01
Economic growth is the main determinant of the trend toward increased greenhouse gas (GHG) emissions. Therefore, the reduction of emissions and the stabilization of GHG levels in the atmosphere have become an urgent task to avoid the worst predicted consequences of climate change. GHG emissions in the construction industry make up a significant part of industrial GHG emissions and are expected to increase steadily. The problem could be successfully addressed with the help of both economic and organizational restrictions, based on enhanced algorithms for the calculation and amercement of environmental harm in the building industry. This study aims to quantify the GHG emissions caused by different constructive schemes of the RC framework in concrete casting. The result shows that the proposed methodology allows a comparative analysis of alternative projects in residential housing, taking into account the environmental damage caused by the construction process. The study was carried out in the framework of the Program of flagship university development on the base of Belgorod State Technological University named after V.G. Shoukhov.
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which makes it possible to accelerate the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
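A compact sketch of the two ingredients named above — random-projection LSH to bucket candidate patterns, and run-length encoding of binary patterns — is given below for toy categorical data. The signature length, pattern size and the plain (non-RLE) Hamming comparison are simplifying assumptions, not the LSHSIM implementation.

```python
import numpy as np

def rle_encode(bits):
    """Run-length encode a 1D binary pattern as (value, run_length) pairs."""
    changes = np.flatnonzero(np.diff(bits)) + 1
    starts = np.concatenate(([0], changes))
    lengths = np.diff(np.concatenate((starts, [len(bits)])))
    return list(zip(bits[starts], lengths))

def lsh_signature(bits, hyperplanes):
    """Random-projection LSH signature: one sign bit per hyperplane."""
    return tuple((hyperplanes @ (2.0 * bits - 1.0)) > 0)

rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(1000, 64))   # toy categorical patterns
planes = rng.normal(size=(8, 64))                # 8 random hyperplanes

print(rle_encode(patterns[0])[:4])               # first few runs of pattern 0

# Bucket the patterns; only bucket-mates of the target are compared exactly.
buckets = {}
for idx, p in enumerate(patterns):
    buckets.setdefault(lsh_signature(p, planes), []).append(idx)

target = patterns[0]
mates = [i for i in buckets[lsh_signature(target, planes)] if i != 0]
if mates:
    best = min(mates, key=lambda i: int(np.sum(patterns[i] != target)))
    print("closest bucket-mate:", best)
else:
    print("no bucket-mates; fall back to a full scan")
```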
Rocket-Based Combined Cycle Engine Technology Development: Inlet CFD Validation and Application
NASA Technical Reports Server (NTRS)
DeBonis, J. R.; Yungster, S.
1996-01-01
A CFD methodology has been developed for inlet analyses of Rocket-Based Combined Cycle (RBCC) Engines. A full Navier-Stokes analysis code, NPARC, was used in conjunction with pre- and post-processing tools to obtain a complete description of the flow field and integrated inlet performance. This methodology was developed and validated using results from a subscale test of the inlet to a RBCC 'Strut-Jet' engine performed in the NASA Lewis 1 x 1 ft. supersonic wind tunnel. Results obtained from this study include analyses at flight Mach numbers of 5 and 6 for super-critical operating conditions. These results showed excellent agreement with experimental data. The analysis tools were also used to obtain pre-test performance and operability predictions for the RBCC demonstrator engine planned for testing in the NASA Lewis Hypersonic Test Facility. This analysis calculated the baseline fuel-off internal force of the engine which is needed to determine the net thrust with fuel on.
Dosimetric calculations for uranium miners for epidemiological studies.
Marsh, J W; Blanchardon, E; Gregoratto, D; Hofmann, W; Karcher, K; Nosske, D; Tomásek, L
2012-05-01
Epidemiological studies on uranium miners are being carried out to quantify the risk of cancer based on organ dose calculations. Mathematical models have been applied to calculate the annual absorbed doses to regions of the lung, red bone marrow, liver, kidney and stomach for each individual miner arising from exposure to radon gas, radon progeny and long-lived radionuclides (LLR) present in the uranium ore dust and to external gamma radiation. The methodology and dosimetric models used to calculate these organ doses are described and the resulting doses for unit exposure to each source (radon gas, radon progeny and LLR) are presented. The results of dosimetric calculations for a typical German miner are also given. For this miner, the absorbed dose to the central regions of the lung is dominated by the dose arising from exposure to radon progeny, whereas the absorbed dose to the red bone marrow is dominated by the external gamma dose. The uncertainties in the absorbed dose to regions of the lung arising from unit exposure to radon progeny are also discussed. These dose estimates are being used in epidemiological studies of cancer in uranium miners.
NASA Astrophysics Data System (ADS)
Esteban Bedoya-Velásquez, Andrés; Navas-Guzmán, Francisco; José Granados-Muñoz, María; Titos, Gloria; Román, Roberto; Andrés Casquero-Vera, Juan; Ortiz-Amezcua, Pablo; Benavent-Oltra, Jose Antonio; de Arruda Moreira, Gregori; Montilla-Rosero, Elena; Hoyos, Carlos David; Artiñano, Begoña; Coz, Esther; José Olmo-Reyes, Francisco; Alados-Arboledas, Lucas; Guerrero-Rascado, Juan Luis
2018-05-01
This study focuses on the analysis of aerosol hygroscopic growth during the Sierra Nevada Lidar AerOsol Profiling Experiment (SLOPE I) campaign by using the synergy of active and passive remote sensors at the ACTRIS Granada station and in situ instrumentation at a mountain station (Sierra Nevada, SNS). To this end, a methodology based on simultaneous measurements of aerosol profiles from an EARLINET multi-wavelength Raman lidar (RL) and relative humidity (RH) profiles obtained from a multi-instrumental approach is used. This approach is based on the combination of calibrated water vapor mixing ratio (r) profiles from RL and continuous temperature profiles from a microwave radiometer (MWR) for obtaining RH profiles with a reasonable vertical and temporal resolution. This methodology is validated against the traditional one that uses RH from co-located radiosounding (RS) measurements, obtaining differences in the hygroscopic growth parameter (γ) lower than 5 % between the methodology based on RS and the one presented here. Additionally, during the SLOPE I campaign the remote sensing methodology used for aerosol hygroscopic growth studies has been checked against Mie calculations of aerosol hygroscopic growth using in situ measurements of particle number size distribution and submicron chemical composition measured at SNS. The hygroscopic case observed during SLOPE I showed an increase in the particle backscatter coefficient at 355 and 532 nm with relative humidity (RH ranged between 78 and 98 %), but also a decrease in the backscatter-related Ångström exponent (AE) and particle linear depolarization ratio (PLDR), indicating that the particles became larger and more spherical due to hygroscopic processes. Vertical and horizontal wind analysis is performed by means of a co-located Doppler lidar system, in order to evaluate the horizontal and vertical dynamics of the air masses. Finally, the Hänel parameterization is applied to experimental data for both stations, and we found good agreement on γ measured with remote sensing (γ532 = 0.48 ± 0.01 and γ355 = 0.40 ± 0.01) with respect to the values calculated using Mie theory (γ532 = 0.53 ± 0.02 and γ355 = 0.45 ± 0.02), with relative differences between measurements and simulations lower than 9 % at 532 nm and 11 % at 355 nm.
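The Hänel parameterization used in the closing step can be fitted with a few lines once a backscatter-versus-RH profile is available; the synthetic profile, reference RH and noise level below are assumptions for illustration.

```python
import numpy as np

def fit_hanel_gamma(rh, backscatter, rh_ref):
    """Fit gamma in f(RH) = beta(RH)/beta(rh_ref) = ((1-RH)/(1-rh_ref))**(-gamma)
    by least squares in log space. RH is a fraction (0-1), increasing."""
    f = backscatter / np.interp(rh_ref, rh, backscatter)
    x = np.log((1.0 - rh) / (1.0 - rh_ref))
    slope, _ = np.polyfit(x, np.log(f), 1)
    return -slope

# Synthetic profile consistent with gamma ~ 0.5, plus a little noise
rng = np.random.default_rng(2)
rh = np.linspace(0.78, 0.96, 30)
beta = ((1 - rh) / (1 - 0.78)) ** -0.5 * (1 + 0.02 * rng.normal(size=30))
print(fit_hanel_gamma(rh, beta, rh_ref=0.78))
```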
Jones, Cheryl Bland
2005-01-01
This is the second article in a 2-part series focusing on nurse turnover and its costs. Part 1 (December 2004) described nurse turnover costs within the context of human capital theory, and using human resource accounting methods, presented the updated Nursing Turnover Cost Calculation Methodology. Part 2 presents an application of this method in an acute care setting and the estimated costs of nurse turnover that were derived. Administrators and researchers can use these methods and cost information to build a business case for nurse retention.
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
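The equivalent continuous level itself is an energy average of the simulated time-varying levels; a minimal sketch (with invented activity states and durations) is:

```python
import numpy as np

def equivalent_continuous_level(levels_db, durations_s):
    """Leq = 10*log10( sum_i t_i * 10**(L_i/10) / sum_i t_i )."""
    levels_db = np.asarray(levels_db, dtype=float)
    durations_s = np.asarray(durations_s, dtype=float)
    energy = np.sum(durations_s * 10.0 ** (levels_db / 10.0))
    return 10.0 * np.log10(energy / durations_s.sum())

# Hypothetical simulated states of one activity over an hour (dB, seconds)
print(equivalent_continuous_level([85, 78, 70], [1200, 1800, 600]))
```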
Optimized Structures and Proton Affinities of Fluorinated Dimethyl Ethers: An Ab Initio Study
NASA Technical Reports Server (NTRS)
Orgel, Victoria B.; Ball, David W.; Zehe, Michael J.
1996-01-01
Ab initio methods have been used to investigate the proton affinity and the geometry changes upon protonation for the molecules (CH3)2O, (CH2F)2O, (CHF2)2O, and (CF3)2O. Geometry optimizations were performed at the MP2/3-21G level, and the resulting geometries were used for single-point MP2/6-31G energy calculations. The proton affinity calculated for (CH3)2O was within 7 kJ/mol of the experimental value, inside the desired variance of ±8 kJ/mol for G2 theory, suggesting that the methodology used in this study is adequate for energy-difference considerations. For (CF3)2O, the calculated proton affinity of 602 kJ/mol suggests that perfluorinated ether molecules do not act as Lewis bases under normal circumstances, e.g., during the degradation of commercial lubricants in tribological applications.
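As a rough illustration of how such proton affinities are obtained from the computed energies, the sketch below evaluates PA ≈ E(B) − E(BH+) from single-point electronic energies, neglecting zero-point and thermal corrections; the hartree values are hypothetical placeholders, not the paper's results.

```python
# Minimal sketch of extracting a proton affinity from ab initio total
# energies: PA ~= E(B) - E(BH+) (the bare proton has no electronic energy;
# zero-point and thermal corrections are neglected here). The hartree
# values below are illustrative placeholders, not the paper's results.
HARTREE_TO_KJ_PER_MOL = 2625.5

def proton_affinity(e_base_hartree, e_protonated_hartree):
    """Proton affinity in kJ/mol from single-point electronic energies."""
    return (e_base_hartree - e_protonated_hartree) * HARTREE_TO_KJ_PER_MOL

# Hypothetical MP2 single-point energies for an ether B and its protonated form BH+
e_b, e_bh = -154.000000, -154.300000
print(f"PA ~ {proton_affinity(e_b, e_bh):.0f} kJ/mol")   # ~788 kJ/mol for these numbers
```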
Reporting and methodological quality of meta-analyses in urological literature.
Xia, Leilei; Xu, Jing; Guzzo, Thomas J
2017-01-01
To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and the AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analysis) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies), and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and an "a priori" design were associated with superior methodological quality. The reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dokhane, A.; Canepa, S.; Ferroukhi, H.
For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system, which relies on the CASMO/SIMULATE-3 suite of codes and was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, thereby achieving an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)
Terwee, Caroline B; Mokkink, Lidwine B; Knol, Dirk L; Ostelo, Raymond W J G; Bouter, Lex M; de Vet, Henrica C W
2012-05-01
The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties.
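A minimal sketch of the "worst score counts" rule is shown below: the quality score for a measurement property is the lowest rating of any item in its COSMIN box. The item ratings are invented for illustration.

```python
# Minimal sketch of the "worst score counts" rule: the quality score of a
# measurement property (one COSMIN box) is the lowest rating of any item in
# that box. Item ratings below are illustrative, not taken from the review.
RATING_ORDER = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def box_score(item_ratings):
    """Return the box-level quality score under 'worst score counts'."""
    return min(item_ratings, key=lambda r: RATING_ORDER[r])

reliability_box = ["excellent", "good", "fair", "excellent"]
print(box_score(reliability_box))   # -> "fair"
```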
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Chen, S
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of calculations, five customized rectangular cutouts of different sizes (6×12, 4×12, 6×8, 4×8, and 3×6 cm²) were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, and 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between the TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV electron beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference at 100 cm SSD, and 0.5–2.7% at 110 cm SSD. Conclusions: Based on comparisons with measurements, the TG-71-based computation method and the Mobius3D program produce reasonably accurate MU calculations for electron-beam therapy.
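As an illustration of the kind of hand or spreadsheet calculation such a databook supports, the sketch below assembles an electron MU from a reference dose rate, output factor, percent depth dose, and an inverse-square term for extended SSD. The factor names, their arrangement, and the numbers are simplified assumptions rather than the TG-71 formalism or the clinic's data.

```python
# Simplified sketch of an electron MU calculation of the kind TG-71
# standardizes: prescribed dose divided by the product of the reference
# dose rate, output factor, depth dose, and an inverse-square/air-gap term.
# Factor names and numbers are illustrative, not clinical data.
def electron_mu(dose_cGy, ref_dose_rate_cGy_per_MU, output_factor,
                pdd_percent, ssd0_cm, d0_cm, ssd_cm, f_air=1.0):
    """Monitor units for an electron field at nominal or extended SSD."""
    inv_sq = ((ssd0_cm + d0_cm) / (ssd_cm + d0_cm)) ** 2
    return dose_cGy / (ref_dose_rate_cGy_per_MU * output_factor *
                       (pdd_percent / 100.0) * inv_sq * f_air)

# 200 cGy at dmax, 9 MeV, 6x8 cm2 cutout in a 10x10 cm2 applicator, 110 cm SSD
print(f"{electron_mu(200.0, 1.0, 0.97, 100.0, 100.0, 2.1, 110.0, f_air=0.99):.0f} MU")
```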
Statistical analysis of radiation dose derived from ingestion of foods
NASA Astrophysics Data System (ADS)
Dougherty, Ward L.
2001-09-01
This analysis undertook the task of designing and implementing a methodology to determine an individual's probabilistic radiation dose from ingestion of foods utilizing Crystal Ball. A dietary intake model was determined by comparing previously existing models. Two principal radionuclides were considered: Lead-210 (Pb-210) and Radium-226 (Ra-226). Samples from three different local grocery stores (Publix, Winn Dixie, and Albertsons) were counted on a gamma spectroscopy system with a Ge(Li) detector. The same food samples were considered as those in the original FIPR database. A statistical analysis, utilizing the Crystal Ball program, was performed on the data to assess the most accurate distribution to use for these data. This allowed a determination of a radiation dose to an individual based on the information collected above. Based on the analyses performed, radiation dose for grocery store samples was lower for Radium-226 than FIPR debris analyses, 2.7 vs. 5.91 mrem/yr. Lead-210 had a higher dose in the grocery store sample than the FIPR debris analyses, 21.4 vs. 518 mrem/yr. The output radiation dose was higher for all evaluations when an accurate estimation of distributions for each value was considered. Radium-226 radiation dose for FIPR and grocery rose to 9.56 and 4.38 mrem/yr. Radiation dose from ingestion of Pb-210 rose to 34.7 and 854 mrem/yr for FIPR and grocery data, respectively. Lead-210 was higher than the initial doses for several reasons: a different peak was examined, the lower edge of the detection limit was used, and the minimum detectable concentration was considered. FIPR did not utilize grocery samples as a control because they calculated a radiation dose that appeared unreasonably high. Consideration of distributions with the initial values allowed reevaluation of radiation dose and showed a significant difference from the original deterministic values. This work shows the value and importance of considering distributions to ensure that a person's radiation dose is accurately calculated. Probabilistic dose methodology was shown to be a more accurate and realistic method of radiation dose determination. This type of methodology provides a visual presentation of the dose distribution that can be a vital aid in risk methodology.
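A minimal sketch of the probabilistic dose calculation described above is given below, replacing Crystal Ball with NumPy sampling: committed dose = activity concentration × annual intake × dose conversion factor, with input distributions propagated by Monte Carlo. All distributions and constants are illustrative assumptions, not the study's data.

```python
# Minimal sketch of a probabilistic ingestion-dose calculation: dose =
# activity concentration x annual intake x dose conversion factor, with
# distributions propagated by Monte Carlo sampling (NumPy used here rather
# than Crystal Ball). All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical Pb-210 inputs for a single food category
conc_pCi_per_kg = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=N)   # activity concentration
intake_kg_per_yr = rng.normal(loc=40.0, scale=8.0, size=N).clip(min=0)  # annual consumption
dcf_mrem_per_pCi = 2.6e-3                                               # dose conversion factor

dose_mrem_per_yr = conc_pCi_per_kg * intake_kg_per_yr * dcf_mrem_per_pCi
print(f"mean dose : {dose_mrem_per_yr.mean():.2f} mrem/yr")
print(f"95th %ile : {np.percentile(dose_mrem_per_yr, 95):.2f} mrem/yr")
```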
Shrivastava, Manisha; Shah, Nehal; Navaid, Seema
2018-01-01
In an era of evidence-based medicine, research is an essential part of the medical profession, whether clinical or academic. A research methodology workshop intends to help participants who are new to the research field as well as those already doing empirical research. The present study was conducted to assess the changes in knowledge of the participants of a research methodology workshop through a structured questionnaire. With administrative and ethical approval, a four-day research methodology workshop was planned. The participants were given a structured questionnaire (pre-test) containing 20 multiple-choice questions (Q1-Q20) related to the topics to be covered in the research methodology workshop before its commencement, and a similar post-test questionnaire after the completion of the workshop. The mean values of the pre- and post-test scores were calculated, and the results were analyzed and compared. Of the 153 delegates, 45 (29%) were males and 108 (71%) were females. Ninety-two (60%) participants consented to fill in the pre-test questionnaire and 68 (44%) filled in the post-test questionnaire. The mean pre-test and post-test scores at the 95% confidence interval were 7.62 (SD ±3.220) and 9.66 (SD ±2.477), respectively. The difference was found to be significant using a paired-sample t-test (P < 0.003). There was an increase in the knowledge of the delegates after attending the research methodology workshop. Participatory research methodology workshops are a good method of imparting knowledge, although the long-term effects need to be evaluated.
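The pre/post comparison reported above can be reproduced in outline with a paired-sample t-test, as sketched below; the score arrays are synthetic stand-ins for the questionnaire data.

```python
# Minimal sketch of the pre/post workshop comparison: a paired-sample
# t-test on matched pre-test and post-test scores out of 20.
# The score arrays are illustrative, not the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(7.6, 3.2, size=60).round().clip(0, 20)        # pre-test scores
post = (pre + rng.normal(2.0, 2.5, size=60)).round().clip(0, 20)  # post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean pre = {pre.mean():.2f}, mean post = {post.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```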
Calculation of Dynamic Loads Due to Random Vibration Environments in Rocket Engine Systems
NASA Technical Reports Server (NTRS)
Christensen, Eric R.; Brown, Andrew M.; Frady, Greg P.
2007-01-01
An important part of rocket engine design is the calculation of random dynamic loads resulting from internal engine "self-induced" sources. These loads are random in nature and can greatly influence the weight of many engine components. Several methodologies for calculating random loads are discussed and then compared to test results using a dynamic testbed consisting of a 60K thrust engine. The engine was tested in a free-free condition with known random force inputs from shakers attached to three locations near the main noise sources on the engine. Accelerations and strains were measured at several critical locations on the engines and then compared to the analytical results using two different random response methodologies.
Spectral radiation analyses of the GOES solar illuminated hexagonal cell scan mirror back
NASA Technical Reports Server (NTRS)
Fantano, Louis G.
1993-01-01
A ray tracing analytical tool has been developed for the simulation of spectral radiation exchange in complex systems. Algorithms are used to account for heat source spectral energy, surface directional radiation properties, and surface spectral absorptivity properties. This tool has been used to calculate the effective solar absorptivity of the geostationary operational environmental satellites (GOES) scan mirror in the calibration position. The development and design of Sounder and Imager instruments on board GOES is reviewed and the problem of calculating the effective solar absorptivity associated with the GOES hexagonal cell configuration is presented. The analytical methodology based on the Monte Carlo ray tracing technique is described and results are presented and verified by experimental measurements for selected solar incidence angles.
NASA Astrophysics Data System (ADS)
Polichtchouk, Yuri; Tokareva, Olga; Bulgakova, Irina V.
2003-03-01
Methodological problems of processing space images for assessing the impact of atmospheric pollution on forest ecosystems using geoinformation systems are addressed. The approach to the quantitative assessment of atmospheric pollution impact on forest ecosystems is based on calculating the relative areas of forest landscapes that lie inside atmospheric pollution zones. The landscape structure of forested territories in the southern part of Western Siberia is determined by processing medium-resolution space images from the spaceborne Resource-O system. Particularities of modeling atmospheric pollution zones caused by gas flaring on oil-field territories are considered. Pollution zones were delineated by modeling contaminant dispersal in the atmosphere with standard models. The areas of polluted landscapes are calculated as a function of the atmospheric pollution level.
NASA Astrophysics Data System (ADS)
Geslin, Pierre-Antoine; Gatti, Riccardo; Devincre, Benoit; Rodney, David
2017-11-01
We propose a framework to study thermally-activated processes in dislocation glide. This approach is based on an implementation of the nudged elastic band method in a nodal mesoscale dislocation dynamics formalism. Special care is paid to develop a variational formulation to ensure convergence to well-defined minimum energy paths. We also propose a methodology to rigorously parametrize the model on atomistic data, including elastic, core and stacking fault contributions. To assess the validity of the model, we investigate the homogeneous nucleation of partial dislocation loops in aluminum, recovering the activation energies and loop shapes obtained with atomistic calculations and extending these calculations to lower applied stresses. The present method is also applied to heterogeneous nucleation on spherical inclusions.
Adamska, K; Bellinghausen, R; Voelkel, A
2008-06-27
The Hansen solubility parameter (HSP) seems to be a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimation of HSP values can cause some problems. In this work, different procedures based on inverse gas chromatography are presented for calculating the solubility parameter of pharmaceutical excipients. The newly proposed procedure, based on the methodology of Lindvig et al., in which experimental Flory-Huggins interaction parameter data are used, can be a reasonable alternative for the estimation of HSP values. The advantage of this method is that the Flory-Huggins interaction parameter chi values for all test solutes are used in the subsequent calculation, so that the diverse interactions between the test solutes and the material are taken into consideration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carisle; Griffin, John Clark
2015-01-01
The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 for camera resolution in high-consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology that measures the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
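A minimal sketch of an MTF measurement of the kind proposed is shown below: differentiate an edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform and report, for example, MTF50. The synthetic edge and pixel pitch are assumptions, and slanted-edge oversampling details are omitted.

```python
# Minimal sketch of an MTF measurement: differentiate an edge spread
# function (ESF) to get the line spread function (LSF), then take the
# magnitude of its Fourier transform. The synthetic edge below stands in
# for real camera data and ignores slanted-edge oversampling details.
import numpy as np

pixel_pitch_mm = 0.005                                  # assumed 5 um pixels
x = np.arange(256) * pixel_pitch_mm
esf = 1.0 / (1.0 + np.exp(-(x - x.mean()) / 0.01))      # synthetic blurred edge

lsf = np.gradient(esf)
lsf /= lsf.sum()                                        # normalise so MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)     # cycles per mm

# Report MTF50: the spatial frequency where contrast drops to 50 %
mtf50 = freqs[np.argmax(mtf < 0.5)]
print(f"MTF50 ~ {mtf50:.0f} cycles/mm")
```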
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.
2008-01-01
A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements obtained. In addition, a CFD solver was coupled with a Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
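The sketch below illustrates only the dominance-filtering step implied by the approach: pool the k best solutions of each single-objective problem and keep the non-dominated ones. It does not reproduce the ripple-spreading algorithm or the theoretical conditions established in the paper, and the candidate objective vectors are invented.

```python
# Minimal sketch of the filtering step: pool the k best solutions of each
# single-objective problem and keep the non-dominated ones. This is only
# the dominance filter, not the ripple-spreading algorithm or the
# theoretical conditions proved in the paper.
def dominates(a, b):
    """True if objective vector a dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of a list of objective vectors."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical k-best candidates for a bi-objective route problem (cost, time)
k_best_by_cost = [(10, 9), (11, 7), (12, 8)]
k_best_by_time = [(15, 4), (13, 5), (11, 7)]
print(sorted(set(pareto_front(k_best_by_cost + k_best_by_time))))
# -> [(10, 9), (11, 7), (13, 5), (15, 4)]
```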
Prioritising and planning of urban stormwater treatment in the Alna watercourse in Oslo.
Nordeidet, B; Nordeide, T; Astebøl, S O; Hvitved-Jacobsen, T
2004-12-01
The Oslo municipal Water and Sewage Works (VAV) intends to improve the water quality in the Alna watercourse, in particular, with regards to the biological diversity. In order to reduce existing discharges of polluted urban stormwater, a study has been carried out to rank subcatchment areas in descending order of magnitude and to assess possible measures. An overall ranking methodology was developed in order to identify and select the most suitable subcatchment areas for further assessment studies (74 subcatchment/drainage areas). The municipality's comprehensive geographical information system (GIS) was applied as a base for the ranking. A weighted ranking based on three selected parameters was chosen from several major influencing factors, namely total yearly discharge (kg pollution/year), specific pollution discharge (kg/area/year) and existing stormwater system (pipe lengths/area). Results show that the highest 15 ranked catchment areas accounted for 70% of the total calculated pollution load of heavy metals. The highest ranked areas are strongly influenced by three major highways. Based on the results from similar field studies, it would be possible to remove 75-85% of total solids and about 50-80% of heavy metals using wet detention ponds as Best Available Technology (BAT). Based on the final ranking, two subcatchment areas were selected for further practical assessment of possible measures. VAV plans to use wet detention ponds, in combination with other measures when relevant, to treat the urban runoff. Using calculated loading and aerial photographs (all done in the same GIS environment), a preliminary sketch design and location of ponds were performed. The resulting GIS methodology for urban stormwater management will be used as input to a holistic and long-term planning process for the management of the watercourse, taking into account future urban development and other pollution sources.
Embedded control system for computerized franking machine
NASA Astrophysics Data System (ADS)
Shi, W. M.; Zhang, L. B.; Xu, F.; Zhan, H. W.
2007-12-01
This paper presents a novel control system for a franking machine. A methodology for operating a franking machine using functional controls consisting of connection, configuration, and the franking electromechanical drive is studied. A set of enabling technologies for synthesizing postage management software architectures driving microprocessor-based embedded systems is proposed. The cryptographic algorithm that accounts for mail items is analyzed to enhance postal indicia accountability and security. The study indicated that the franking machine offers reliability, performance, and flexibility in printing mail items.
A Cost Model for Testing Unmanned and Autonomous Systems of Systems
2011-02-01
those risks. In addition, the fundamental methods presented by Aranha and Borba to include the complexity and sizing of tests for UASoS can be expanded...used as an input for test execution effort estimation models (Aranha & Borba, 2007). Such methodology is very relevant to this work because as a UASoS...calculate the test effort based on the complexity of the SoS. However, Aranha and Borba define test size as the number of steps required to complete
Abortion and mental health: quantitative synthesis and analysis of research published 1995-2009.
Coleman, Priscilla K
2011-09-01
Given the methodological limitations of recently published qualitative reviews of abortion and mental health, a quantitative synthesis was deemed necessary to represent more accurately the published literature and to provide clarity to clinicians. To measure the association between abortion and indicators of adverse mental health, with subgroup effects calculated based on comparison groups (no abortion, unintended pregnancy delivered, pregnancy delivered) and particular outcomes. A secondary objective was to calculate population-attributable risk (PAR) statistics for each outcome. After the application of methodologically based selection criteria and extraction rules to minimise bias, the sample comprised 22 studies, 36 measures of effect and 877 181 participants (163 831 experienced an abortion). Random effects pooled odds ratios were computed using adjusted odds ratios from the original studies and PAR statistics were derived from the pooled odds ratios. Women who had undergone an abortion experienced an 81% increased risk of mental health problems, and nearly 10% of the incidence of mental health problems was shown to be attributable to abortion. The strongest subgroup estimates of increased risk occurred when abortion was compared with term pregnancy and when the outcomes pertained to substance use and suicidal behaviour. This review offers the largest quantitative estimate of mental health risks associated with abortion available in the world literature. Calling into question the conclusions from traditional reviews, the results revealed a moderate to highly increased risk of mental health problems after abortion. Consistent with the tenets of evidence-based medicine, this information should inform the delivery of abortion services.
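The population-attributable-risk figures reported above follow from a pooled odds ratio and an exposure prevalence via Levin's formula, PAR = p_e(OR − 1)/(1 + p_e(OR − 1)). The sketch below applies this formula to numbers taken loosely from the abstract; the review derived its figure per outcome, so the result here is only indicative.

```python
# Minimal sketch of a population-attributable-risk (PAR) calculation using
# Levin's formula, PAR = p_e (OR - 1) / (1 + p_e (OR - 1)), with the odds
# ratio standing in for relative risk. Inputs are taken loosely from the
# abstract and only approximate the per-outcome figures in the review.
def population_attributable_risk(odds_ratio, exposure_prevalence):
    excess = exposure_prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

p_exposed = 163_831 / 877_181          # share of participants with the exposure
print(f"PAR ~ {100 * population_attributable_risk(1.81, p_exposed):.1f} %")  # ~13 %
```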
Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2013-01-01
Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L(sub 50), lives.
ERIC Educational Resources Information Center
Watson, Silvana Maria R.; Lopes, João; Oliveira, Célia; Judge, Sharon
2018-01-01
Purpose: The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks. Design/methodology/approach: The researchers have examined error types in addition and subtraction calculation made by 697 Portuguese students in elementary grades. Each student…
Quantum chemical approaches in structure-based virtual screening and lead optimization
NASA Astrophysics Data System (ADS)
Cavasotto, Claudio N.; Adler, Natalia S.; Aucar, Maria G.
2018-05-01
Today computational chemistry is a consolidated tool in drug lead discovery endeavors. Due to methodological developments and to the enormous advance in computer hardware, methods based on quantum mechanics (QM) have gained great attention in the last 10 years, and calculations on biomacromolecules are becoming increasingly explored, aiming to provide better accuracy in the description of protein-ligand interactions and the prediction of binding affinities. In principle, the QM formulation includes all contributions to the energy, accounting for terms usually missing in molecular mechanics force-fields, such as electronic polarization effects, metal coordination, and covalent binding; moreover, QM methods are systematically improvable, and provide a greater degree of transferability. In this mini-review we present recent applications of explicit QM-based methods in small-molecule docking and scoring, and in the calculation of binding free-energy in protein-ligand systems. Although the routine use of QM-based approaches in an industrial drug lead discovery setting remains a formidable challenging task, it is likely they will increasingly become active players within the drug discovery pipeline.
PDF-based heterogeneous multiscale filtration model.
Gong, Jian; Rutland, Christopher J
2015-04-21
Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
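A minimal sketch of the PDF-weighted idea is given below: a unit-collector wall efficiency is evaluated across a collector-size distribution and averaged with the size PDF, instead of being evaluated at a single mean size. The single-collector efficiency expression, the wall-efficiency form, and all numbers are simplified placeholders, not the published HMF model.

```python
# Minimal sketch of PDF-weighted filtration: a unit-collector wall
# efficiency is evaluated for each collector size and averaged over a
# pore/collector size probability density function instead of using a
# single mean size. The expressions and numbers are simplified stand-ins.
import numpy as np

def single_collector_efficiency(d_collector_um, particle_um=0.1):
    """Toy stand-in for the diffusion/interception collector efficiency."""
    return min(1.0, 0.08 * (particle_um / 0.1) ** 0.5 * (50.0 / d_collector_um))

def wall_efficiency(d_collector_um, wall_um=300.0, porosity=0.6):
    eta_c = single_collector_efficiency(d_collector_um)
    # Penetration through a packed wall of unit collectors of this size
    return 1.0 - np.exp(-1.5 * eta_c * (1 - porosity) * wall_um /
                        (porosity * d_collector_um))

# Log-normal collector-size PDF (heterogeneous wall) vs. its mean size alone
sizes = np.linspace(5.0, 120.0, 500)
weights = np.exp(-(np.log(sizes) - np.log(30.0)) ** 2 / (2 * 0.5 ** 2)) / sizes
weights /= weights.sum()                       # discrete collector-size PDF

hmf_eff = np.sum(weights * np.array([wall_efficiency(s) for s in sizes]))
mean_eff = wall_efficiency(np.sum(weights * sizes))
print(f"PDF-weighted efficiency: {hmf_eff:.3f}, mean-size efficiency: {mean_eff:.3f}")
```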
Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet
Zhu, Yaguang; Jin, Bo; Wu, Yongsheng; Guo, Tong; Zhao, Xiangmo
2016-01-01
Aimed at solving the misplaced body trajectory problem caused by the rolling of semi-round rigid feet when a robot is walking, a legged kinematic trajectory correction methodology based on the Least Squares Support Vector Machine (LS-SVM) is proposed. The concept of ideal foothold is put forward for the three-dimensional kinematic model modification of a robot leg, and the deviation value between the ideal foothold and real foothold is analyzed. The forward/inverse kinematic solutions between the ideal foothold and joint angular vectors are formulated and the problem of direct/inverse kinematic nonlinear mapping is solved by using the LS-SVM. Compared with the previous approximation method, this correction methodology has better accuracy and faster calculation speed with regards to inverse kinematics solutions. Experiments on a leg platform and a hexapod walking robot are conducted with multi-sensors for the analysis of foot tip trajectory, base joint vibration, contact force impact, direction deviation, and power consumption, respectively. The comparative analysis shows that the trajectory correction methodology can effectively correct the joint trajectory, thus eliminating the contact force influence of semi-round rigid feet, significantly improving the locomotion of the walking robot and reducing the total power consumption of the system. PMID:27589766
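The learning step can be sketched as below: fit a nonlinear inverse map from desired foot-tip positions to corrected joint angles. LS-SVM regression is approximated here with scikit-learn's KernelRidge (the same kernelized least-squares formulation); the 2-DOF toy kinematics, the rolling-foot offset, and all hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal sketch of the learning step: fit a nonlinear map from desired
# foot-tip positions to corrected joint angles. LS-SVM regression with an
# RBF kernel is approximated by kernel ridge regression; the leg model
# below is a toy 2-DOF stand-in, not the authors' hexapod kinematics.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)

def toy_forward_kinematics(q, l1=0.10, l2=0.12):
    """Foot-tip (x, z) of a planar 2-link leg, plus a small rolling-foot offset."""
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1]) + 0.005 * np.cos(q[:, 1])
    z = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, z])

# Training data: sampled joint angles and the footholds they actually produce
q_train = rng.uniform([-0.8, 0.3], [0.3, 1.6], size=(400, 2))
p_train = toy_forward_kinematics(q_train)

# Inverse mapping (foothold -> joint angles) learned with a kernelised least-squares model
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=50.0).fit(p_train, q_train)

p_goal = np.array([[0.15, 0.05]])
q_corr = model.predict(p_goal)
print("corrected joint angles:", q_corr, "reaches:", toy_forward_kinematics(q_corr))
```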
Bao, Yihai; Main, Joseph A.; Noh, Sam-Young
2017-01-01
A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness. PMID:28890599
NASA Astrophysics Data System (ADS)
García-Florentino, Cristina; Maguregui, Maite; Marguí, Eva; Torrent, Laura; Queralt, Ignasi; Madariaga, Juan Manuel
2018-05-01
In this work, a Total Reflection X-ray fluorescence (TXRF) spectrometry based quantitative methodology for elemental characterization of liquid extracts and solids belonging to old building materials and their degradation products from a building of the beginning of 20th century with a high historic cultural value in Getxo, (Basque Country, North of Spain) is proposed. This quantification strategy can be considered a faster methodology comparing to traditional Energy or Wavelength Dispersive X-ray fluorescence (ED-XRF and WD-XRF) spectrometry based methodologies or other techniques such as Inductively Coupled Plasma Mass Spectrometry (ICP-MS). In particular, two kinds of liquid extracts were analysed: (i) water soluble extracts from different mortars and (ii) acid extracts from mortars, black crusts, and calcium carbonate formations. In order to try to avoid the acid extraction step of the materials and their degradation products, it was also studied the TXRF direct measurement of the powdered solid suspensions in water. With this aim, different parameters such as the deposition volume and the measuring time were studied for each kind of samples. Depending on the quantified element, the limits of detection achieved with the TXRF quantitative methodologies for liquid extracts and solids were set around 0.01-1.2 and 2-200 mg/L respectively. The quantification of K, Ca, Ti, Mn, Fe, Zn, Rb, Sr, Sn and Pb in the liquid extracts was proved to be a faster alternative to other more classic quantification techniques (i.e. ICP-MS), accurate enough to obtain information about the composition of the acidic soluble part of the materials and their degradation products. Regarding the solid samples measured as suspensions, it was quite difficult to obtain stable and repetitive suspensions affecting in this way the accuracy of the results. To cope with this problem, correction factors based on the quantitative results obtained using ED-XRF were calculated to improve the accuracy of the TXRF results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Eric Robert
The included methodology, calculations, and drawings support design of Carbon Fiber Reinforced Polymer (CFRP) spike anchors for securing U-wrap CFRP onto reinforced concrete T-beams. This content pertains to an installation in one of Los Alamos National Laboratory’s facilities. The anchors are part of a seismic rehabilitation to the subject facility. The information contained here is for information purposes only. The reader is encouraged to verify all equations, details, and methodology prior to usage in future projects. However, development of the content contained here complied with Los Alamos National Laboratory’s NQA-1 quality assurance program for nuclear structures. Furthermore, the formulations and details came from the referenced published literature. This literature represents the current state of the art for FRP anchor design. Construction personnel tested the subject anchor design to the required demand level demonstrated in the calculation. The testing demonstrated the ability of the anchors noted to carry loads in excess of 15 kips in direct tension. The anchors were not tested to failure in part because of the hazards associated with testing large-capacity tensile systems to failure. The calculation, methodology, and drawing originator was Eric MacFarlane of Los Alamos National Laboratory’s (LANL) Office of Seismic Hazards and Risk Mitigation (OSHRM). The checker for all components was Mike Salmon of the LANL OSHRM. The independent reviewers of all components were Insung Kim and Loring Wyllie of Degenkolb Engineers. Note that Insung Kim contributed to the initial formulations in the calculations that pertained directly to his Doctoral research.
Materials Selection Criteria for Nuclear Power Applications: A Decision Algorithm
NASA Astrophysics Data System (ADS)
Rodríguez-Prieto, Álvaro; Camacho, Ana María; Sebastián, Miguel Ángel
2016-02-01
An innovative methodology based on stringency levels is proposed in this paper and improves the current selection method for structural materials used in demanding industrial applications. This paper describes a new approach for quantifying the stringency of materials requirements based on a novel deterministic algorithm to prevent potential failures. We have applied the new methodology to different standardized specifications used in pressure vessels design, such as SA-533 Grade B Cl.1, SA-508 Cl.3 (issued by the American Society of Mechanical Engineers), DIN 20MnMoNi55 (issued by the German Institute of Standardization) and 16MND5 (issued by the French Nuclear Commission) specifications and determine the influence of design code selection. This study is based on key scientific publications on the influence of chemical composition on the mechanical behavior of materials, which were not considered when the technological requirements were established in the aforementioned specifications. For this purpose, a new method to quantify the efficacy of each standard has been developed using a deterministic algorithm. The process of assigning relative weights was performed by consulting a panel of experts in materials selection for reactor pressure vessels to provide a more objective methodology; thus, the resulting mathematical calculations for quantitative analysis are greatly simplified. The final results show that steel DIN 20MnMoNi55 is the best material option. Additionally, more recently developed materials such as DIN 20MnMoNi55, 16MND5 and SA-508 Cl.3 exhibit mechanical requirements more stringent than SA-533 Grade B Cl.1. The methodology presented in this paper can be used as a decision tool in selection of materials for a wide range of applications.
Variations in cost calculations in spine surgery cost-effectiveness research.
Alvin, Matthew D; Miller, Jacob A; Lubelski, Daniel; Rosenbaum, Benjamin P; Abdullah, Kalil G; Whitmore, Robert G; Benzel, Edward C; Mroz, Thomas E
2014-06-01
Cost-effectiveness research in spine surgery has been a prominent focus over the last decade. However, there has yet to be a standardized method developed for calculation of costs in such studies. This lack of a standardized costing methodology may lead to conflicting conclusions on the cost-effectiveness of an intervention for a specific diagnosis. The primary objective of this study was to systematically review all cost-effectiveness studies published on spine surgery and compare and contrast various costing methodologies used. The authors performed a systematic review of the cost-effectiveness literature related to spine surgery. All cost-effectiveness analyses pertaining to spine surgery were identified using the cost-effectiveness analysis registry database of the Tufts Medical Center Institute for Clinical Research and Health Policy, and the MEDLINE database. Each article was reviewed to determine the study subject, methodology, and results. Data were collected from each study, including costs, interventions, cost calculation method, perspective of cost calculation, and definitions of direct and indirect costs if available. Thirty-seven cost-effectiveness studies on spine surgery were included in the present study. Twenty-seven (73%) of the studies involved the lumbar spine and the remaining 10 (27%) involved the cervical spine. Of the 37 studies, 13 (35%) used Medicare reimbursements, 12 (32%) used a case-costing database, 3 (8%) used cost-to-charge ratios (CCRs), 2 (5%) used a combination of Medicare reimbursements and CCRs, 3 (8%) used the United Kingdom National Health Service reimbursement system, 2 (5%) used a Dutch reimbursement system, 1 (3%) used the United Kingdom Department of Health data, and 1 (3%) used the Tricare Military Reimbursement system. Nineteen (51%) studies completed their cost analysis from the societal perspective, 11 (30%) from the hospital perspective, and 7 (19%) from the payer perspective. Of those studies with a societal perspective, 14 (38%) reported actual indirect costs. Changes in cost have a direct impact on the value equation for concluding whether an intervention is cost-effective. It is essential to develop a standardized, accurate means of calculating costs. Comparability and transparency are essential, such that studies can be compared properly and policy makers can be appropriately informed when making decisions for our health care system based on the results of these studies.
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...
Code of Federal Regulations, 2014 CFR
2014-01-01
... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...
Code of Federal Regulations, 2012 CFR
2012-01-01
... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...
Code of Federal Regulations, 2013 CFR
2013-01-01
... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...
NASA Astrophysics Data System (ADS)
Shultheis, C. F.
1985-02-01
This technical report describes an analysis of the performance allocations for a satellite link, focusing specifically on a single-hop 7 to 8 GHz link of the Defense Satellite Communications System (DSCS). The analysis is performed for three primary reasons: (1) to reevaluate link power margin requirements for DSCS links based on digital signalling; (2) to analyze the implications of satellite availability and error rate allocations contained in proposed MIL-STD-188-323, system design and engineering standards for long haul digital transmission system performance; and (3) to standardize a methodology for determination of rain-related propagation constraints. The aforementioned methodology is then used to calculate the link margin requirements of typical DSCS binary/quaternary phase shift keying (BPSK/QPSK) links at 7 to 8 GHz for several different Earth terminal locations.
Yoshioka, Akio; Fukuzawa, Kaori; Mochizuki, Yuji; Yamashita, Katsumi; Nakano, Tatsuya; Okiyama, Yoshio; Nobusawa, Eri; Nakajima, Katsuhisa; Tanaka, Shigenori
2011-09-01
Ab initio electronic-state calculations for influenza virus hemagglutinin (HA) trimer complexed with Fab antibody were performed on the basis of the fragment molecular orbital (FMO) method at the second and third-order Møller-Plesset (MP2 and MP3) perturbation levels. For the protein complex containing 2351 residues and 36,160 atoms, the inter-fragment interaction energies (IFIEs) were evaluated to illustrate the effective interactions between all the pairs of amino acid residues. By analyzing the calculated data on the IFIEs, we first discussed the interactions and their fluctuations between multiple domains contained in the trimer complex. Next, by combining the IFIE data between the Fab antibody and each residue in the HA antigen with experimental data on the hemadsorption activity of HA mutants, we proposed a protocol to predict probable mutations in HA. The proposed protocol based on the FMO-MP2.5 calculation can explain the historical facts concerning the actual mutations after the emergence of A/Hong Kong/1/68 influenza virus with subtype H3N2, and thus provides a useful methodology to enumerate those residue sites likely to mutate in the future. Copyright © 2011 Elsevier Inc. All rights reserved.
Application of Fuzzy Logic to Matrix FMECA
NASA Astrophysics Data System (ADS)
Shankar, N. Ravi; Prabhu, B. S.
2001-04-01
A methodology combining the benefits of Fuzzy Logic and Matrix FMEA is presented in this paper. The presented methodology extends the risk prioritization beyond the conventional Risk Priority Number (RPN) method. Fuzzy logic is used to calculate the criticality rank. Also the matrix approach is improved further to develop a pictorial representation retaining all relevant qualitative and quantitative information of several FMEA elements relationships. The methodology presented is demonstrated by application to an illustrative example.
DB4US: A Decision Support System for Laboratory Information Management.
Carmona-Cejudo, José M; Hortas, Maria Luisa; Baena-García, Manuel; Lana-Linati, Jorge; González, Carlos; Redondo, Maximino; Morales-Bueno, Rafael
2012-11-14
Until recently, laboratory automation has focused primarily on improving hardware. Future advances are concentrated on intelligent software since laboratories performing clinical diagnostic testing require improved information systems to address their data processing needs. In this paper, we propose DB4US, an application that automates information related to laboratory quality indicators information. Currently, there is a lack of ready-to-use management quality measures. This application addresses this deficiency through the extraction, consolidation, statistical analysis, and visualization of data related to the use of demographics, reagents, and turn-around times. The design and implementation issues, as well as the technologies used for the implementation of this system, are discussed in this paper. To develop a general methodology that integrates the computation of ready-to-use management quality measures and a dashboard to easily analyze the overall performance of a laboratory, as well as automatically detect anomalies or errors. The novelty of our approach lies in the application of integrated web-based dashboards as an information management system in hospital laboratories. We propose a new methodology for laboratory information management based on the extraction, consolidation, statistical analysis, and visualization of data related to demographics, reagents, and turn-around times, offering a dashboard-like user web interface to the laboratory manager. The methodology comprises a unified data warehouse that stores and consolidates multidimensional data from different data sources. The methodology is illustrated through the implementation and validation of DB4US, a novel web application based on this methodology that constructs an interface to obtain ready-to-use indicators, and offers the possibility to drill down from high-level metrics to more detailed summaries. The offered indicators are calculated beforehand so that they are ready to use when the user needs them. The design is based on a set of different parallel processes to precalculate indicators. The application displays information related to tests, requests, samples, and turn-around times. The dashboard is designed to show the set of indicators on a single screen. DB4US was deployed for the first time in the Hospital Costa del Sol in 2008. In our evaluation we show the positive impact of this methodology for laboratory professionals, since the use of our application has reduced the time needed for the elaboration of the different statistical indicators and has also provided information that has been used to optimize the usage of laboratory resources by the discovery of anomalies in the indicators. DB4US users benefit from Internet-based communication of results, since this information is available from any computer without having to install any additional software. The proposed methodology and the accompanying web application, DB4US, automates the processing of information related to laboratory quality indicators and offers a novel approach for managing laboratory-related information, benefiting from an Internet-based communication mechanism. The application of this methodology has been shown to improve the usage of time, as well as other laboratory resources.
Qualification of APOLLO2 BWR calculation scheme on the BASALA mock-up
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaglio-Gaudard, C.; Santamarina, A.; Sargeni, A.
2006-07-01
A new neutronic APOLLO2/MOC/SHEM/CEA2005 calculation scheme for BWR applications has been developed by the French 'Commissariat a l'Energie Atomique'. This scheme is based on the latest calculation methodology (accurate mutual and self-shielding formalism, MOC treatment of the transport equation) and the recent JEFF3.1 nuclear data library. This paper presents the experimental validation of this new calculation scheme on the BASALA BWR mock-up. The BASALA programme is devoted to the measurement of the physical parameters of high-moderation 100% MOX BWR cores, in hot and cold conditions. The experimental validation of the calculation scheme deals with core reactivity, fission rate maps, reactivity worth of void and absorbers (cruciform control blades and Gd pins), as well as the temperature coefficient. Results of the analysis using APOLLO2/MOC/SHEM/CEA2005 show an overestimation of the core reactivity by 600 pcm for BASALA-Hot and 750 pcm for BASALA-Cold. The reactivity worths of gadolinium poison pins and hafnium or B4C control blades are predicted by the APOLLO2 calculation within 2% accuracy. Furthermore, the radial power map is well predicted for every core configuration, including the Void configuration and the Hf/B4C configurations: fission rates in the central assembly are calculated within the ±2% experimental uncertainty for the reference cores. The C/E bias on the isothermal Moderator Temperature Coefficient, using the CEA2005 library based on the JEFF3.1 file, amounts to -1.7 ± 0.3 pcm/°C over the range 10-80 °C. (authors)
42 CFR 416.171 - Determination of payment rates for ASC services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Determination of payment rates for ASC services... Determination of payment rates for ASC services. (a) Standard methodology. The standard methodology for determining the national unadjusted payment rate for ASC services is to calculate the product of the...
We present a robust methodology for examining the relationship between synoptic-scale atmospheric transport patterns and pollutant concentration levels observed at a site. Our approach entails calculating a large number of back-trajectories from the observational site over a long...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... non-dumped comparisons. Several World Trade Organization (``WTO'') dispute settlement reports have... methodologies have been challenged as being inconsistent with the World Trade Organization (``WTO'') General... comparisons in reviews in a manner that parallels the WTO-consistent methodology the Department currently...
Methodologies for extracting kinetic constants for multiphase reacting flow simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S.L.; Lottes, S.A.; Golchert, B.
1997-03-01
Flows in industrial reactors often involve complex reactions of many species. A computational fluid dynamics (CFD) computer code, ICRKFLO, was developed to simulate multiphase, multi-species reacting flows. The ICRKFLO uses a hybrid technique to calculate species concentration and reaction for a large number of species in a reacting flow. This technique includes a hydrodynamic and reacting flow simulation with a small but sufficient number of lumped reactions to compute flow field properties followed by a calculation of local reaction kinetics and transport of many subspecies (order of 10 to 100). Kinetic rate constants of the numerous subspecies chemical reactions are difficult to determine. A methodology has been developed to extract kinetic constants from experimental data efficiently. A flow simulation of a fluid catalytic cracking (FCC) riser was successfully used to demonstrate this methodology.
Parametric Criticality Safety Calculations for Arrays of TRU Waste Containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gough, Sean T.
The Nuclear Criticality Safety Division (NCSD) has performed criticality safety calculations for finite and infinite arrays of transuranic (TRU) waste containers. The results of these analyses may be applied in any technical area onsite (e.g., TA-54, TA-55, etc.), as long as the assumptions herein are met. These calculations are designed to update the existing reference calculations for waste arrays documented in Reference 1, in order to meet current guidance on calculational methodology.
do Amaral, Leonardo L.; Pavoni, Juliana F.; Sampaio, Francisco; Netto, Thomaz Ghilardi
2015-01-01
Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions of the acrylic support used for positioning the film but in a source‐to‐detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertically, that is, perpendicular to the phantom. To validate this procedure, first of all a Monte Carlo simulation using PENELOPE code was done to evaluate the differences between the dose distributions measured by the film in a SDD of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally a study using IMRT treatments was done. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distribution acquired in the two SDDs were 99.92%±0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function were 99.85%±0.26% and the mean percentage differences in the normalization point doses were −1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS number: 87.55.Qr, 87.55.km, 87.55.N‐ PMID:26699306
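The gamma-function comparison referred to above scores each point by the best combined dose-difference and distance-to-agreement match; the sketch below implements a 1D global gamma analysis with a 3%/3 mm criterion on synthetic profiles. Clinical tools add interpolation and 2D/3D search, and the profiles here are not measurement data.

```python
# Minimal sketch of a gamma-index comparison between measured and
# calculated dose distributions (1D, 3 %/3 mm criterion, global dose
# normalisation). Profiles are synthetic; clinical gamma tools add
# interpolation and full 2D/3D search.
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions_mm, dose_tol=0.03, dta_mm=3.0):
    norm = dose_ref.max()
    gammas = []
    for xe, de in zip(positions_mm, dose_eval):
        dist2 = ((positions_mm - xe) / dta_mm) ** 2
        dose2 = ((dose_ref - de) / (dose_tol * norm)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

x = np.arange(0.0, 100.0, 1.0)                          # mm
reference = np.exp(-((x - 50.0) / 20.0) ** 2)           # planned profile
measured = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)     # film with small shift/scale
print(f"gamma pass rate: {gamma_pass_rate(reference, measured, x):.1f} %")
```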
Implementation and adaptation of a macro-scale methodology to calculate direct economic losses
NASA Astrophysics Data System (ADS)
Natho, Stephanie; Thieken, Annegret
2017-04-01
As one of the 195 member countries of the United Nations, Germany signed the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). With this, though voluntary and non-binding, Germany agreed to report on achievements to reduce disaster impacts. Among other targets, the SFDRR aims at reducing direct economic losses in relation to the global gross domestic product by 2030 - but how can this be measured without a standardized approach? The United Nations Office for Disaster Risk Reduction (UNISDR) has hence proposed a methodology to estimate direct economic losses per event and country on the basis of the number of damaged or destroyed items in different sectors. The method is based on experience from developing countries, but its applicability in industrialized countries has not been investigated so far. Therefore, this study presents the first implementation of this approach in Germany to test its applicability for the costliest natural hazards and suggests adaptations. The approach proposed by UNISDR considers assets in the sectors agriculture, industry, commerce, housing, and infrastructure, the last represented by roads and by medical and educational facilities. The asset values are estimated from the sector- and event-specific number of affected items, sector-specific mean sizes per item, standardized construction costs per square meter, and a loss ratio of 25%. The methodology was tested for the three costliest natural hazard types in Germany, i.e. floods, storms and hail storms, considering 13 case studies on the federal or state scale between 1984 and 2016. Due to incomplete documentation, a complete calculation of all sectors needed to describe the total direct economic loss was not possible for any event; therefore, the method was tested sector-wise. Three new modules were developed to better adapt the methodology to German conditions, covering private transport (cars), forestry and paved roads, whereas unpaved roads were integrated into the agricultural and forestry sectors. Furthermore, overheads are proposed to include the costs of housing contents as well as the overall costs of public infrastructure, one of the most important damage sectors. All constants for sector-specific mean sizes and construction costs were adapted, and loss ratios were adapted for each event. Whereas the original UNISDR method over- and underestimates the losses of the tested events, the adapted method calculates losses in good accordance with the documentation for river floods, hail storms and storms. For example, for the 2013 flood, economic losses of EUR 6.3 billion were calculated with the adapted method (original UNISDR method: EUR 0.85 billion; documentation: EUR 11 billion). For the 2013 hail storms, the calculated EUR 3.6 billion overestimates the documented losses of EUR 2.7 billion by less than the original UNISDR approach does (EUR 5.2 billion). Only for flash floods, where public infrastructure can account for more than 90% of total losses, is the method not applicable. The adapted methodology serves as a good starting point for macro-scale loss estimations by accounting for the most important damage sectors. By implementing this approach into damage and event documentation and reporting standards, consistent monitoring according to the SFDRR could be achieved.
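The sector-wise asset formula described above lends itself to a very small calculation sketch. The Python snippet below assumes the UNISDR-style product of affected items, mean item size, construction cost per square meter, and loss ratio; the specific numbers are hypothetical placeholders rather than the UNISDR or adapted German constants.

# Minimal sketch of the sector-wise direct loss formula described above:
# loss = affected_items * mean_size_m2 * construction_cost_per_m2 * loss_ratio.
# All numbers are hypothetical placeholders.
def sector_loss(affected_items: int,
                mean_size_m2: float,
                construction_cost_per_m2: float,
                loss_ratio: float = 0.25) -> float:
    """Direct economic loss for one sector, in the currency of the cost input."""
    return affected_items * mean_size_m2 * construction_cost_per_m2 * loss_ratio

# Example: 1,200 damaged houses, 120 m2 each, EUR 1,500/m2, 25% loss ratio
housing = sector_loss(1200, 120.0, 1500.0, 0.25)
print(f"Housing sector loss: EUR {housing:,.0f}")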
Edwards, Mervyn; Nathanson, Andrew; Carroll, Jolyon; Wisch, Marcus; Zander, Oliver; Lubbe, Nils
2015-01-01
Autonomous emergency braking (AEB) systems fitted to cars for pedestrians have been predicted to offer substantial benefit. On this basis, consumer rating programs, for example the European New Car Assessment Programme (Euro NCAP), are developing rating schemes to encourage fitment of these systems. One of the questions that needs to be answered to do this fully is how the assessment of the speed reduction offered by the AEB is integrated with the current assessment of the passive safety for mitigation of pedestrian injury. Ideally, this should be done on a benefit-related basis. The objective of this research was to develop a benefit-based methodology for assessment of integrated pedestrian protection systems with AEB and passive safety components. The method should include weighting procedures to ensure that it represents injury patterns from accident data and replicates an independently estimated benefit of AEB. A methodology has been developed to calculate the expected societal cost of pedestrian injuries, assuming that all pedestrians in the target population (i.e., pedestrians impacted by the front of a passenger car) are impacted by the car being assessed, taking into account the impact speed reduction offered by the car's AEB (if fitted) and the passive safety protection offered by the car's frontal structure. For rating purposes, the cost for the assessed car is normalized by comparing it to the cost calculated for a reference car. The speed reductions measured in AEB tests are used to determine the speed at which each pedestrian in the target population will be impacted. Injury probabilities for each impact are then calculated using the results from Euro NCAP pedestrian impactor tests and injury risk curves. These injury probabilities are converted into cost using "harm"-type costs for the body regions tested. These costs are weighted and summed. Weighting factors were determined using accident data from Germany and Great Britain and an independently estimated AEB benefit. Versions of the methodology are available for both Germany and Great Britain. The methodology was used to assess cars with good, average, and poor Euro NCAP pedestrian ratings, in combination with a current AEB system. The fitment of a hypothetical A-pillar airbag was also investigated. It was found that the decrease in casualty injury cost achieved by fitting an AEB system was approximately equivalent to that achieved by increasing the passive safety rating from poor to average. Because the assessment was influenced strongly by the level of head protection offered in the scuttle and windscreen area, a hypothetical A-pillar airbag showed high potential to reduce overall casualty cost. A benefit-based methodology for assessment of integrated pedestrian protection systems with AEB has been developed and tested. It uses input from AEB tests and Euro NCAP passive safety tests to give an integrated assessment of the system performance, which includes consideration of effects such as the change in head impact location caused by the impact speed reduction given by the AEB.
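As a rough illustration of the aggregation described above, the following Python sketch combines an AEB-reduced impact speed, hypothetical logistic injury-risk curves, hypothetical harm costs, and case weights into an expected cost normalized by a reference car. The function names, risk parameters, and cost values are invented for illustration and do not reproduce the Euro NCAP or accident-data inputs.

# Hedged sketch of the benefit-based aggregation described above. The risk
# curves, harm costs, and weights are hypothetical stand-ins.
import math

def injury_probability(impact_speed_kph: float, alpha: float, beta: float) -> float:
    """Hypothetical logistic injury-risk curve for one body region."""
    return 1.0 / (1.0 + math.exp(-(alpha + beta * impact_speed_kph)))

def casualty_cost(cases, aeb_speed_reduction_kph, risk_params, harm_costs):
    """Expected societal cost summed over weighted accident cases."""
    total = 0.0
    for case in cases:
        speed = max(case["impact_speed_kph"] - aeb_speed_reduction_kph, 0.0)
        for region, (alpha, beta) in risk_params.items():
            p = injury_probability(speed, alpha, beta)
            total += case["weight"] * p * harm_costs[region]
    return total

cases = [{"impact_speed_kph": 40.0, "weight": 0.6},
         {"impact_speed_kph": 55.0, "weight": 0.4}]
risk_params = {"head": (-6.0, 0.12), "pelvis": (-5.0, 0.08)}   # hypothetical
harm_costs = {"head": 250_000.0, "pelvis": 90_000.0}           # hypothetical

assessed = casualty_cost(cases, aeb_speed_reduction_kph=10.0,
                         risk_params=risk_params, harm_costs=harm_costs)
reference = casualty_cost(cases, aeb_speed_reduction_kph=0.0,
                          risk_params=risk_params, harm_costs=harm_costs)
print(f"Normalized rating score: {assessed / reference:.2f}")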
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurens, Lieve M; Olstad-Thompson, Jessica L; Templeton, David W
Accurately determining protein content is important in the valorization of algal biomass in food, feed, and fuel markets, where these values are used for component balance calculations. Conversion of elemental nitrogen to protein is a well-accepted and widely practiced method, but it depends on developing an applicable nitrogen-to-protein conversion factor. The methodology reported here covers the quantitative assessment of the total nitrogen content of algal biomass and describes the procedure that underpins the accurate de novo calculation of a dedicated nitrogen-to-protein conversion factor.
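As an illustration of the de novo factor calculation, the sketch below assumes the factor is obtained as the ratio of summed amino-acid mass to total nitrogen mass; the amino acid profile and nitrogen content shown are hypothetical, not measured algal values, and the published protocol may define the ratio with additional corrections.

# Illustrative sketch only: a de novo nitrogen-to-protein factor computed as the
# ratio of summed amino-acid mass to total nitrogen mass. All values hypothetical.
amino_acids_g_per_100g = {        # hypothetical amino acid profile of the biomass
    "Glu": 3.1, "Asp": 2.6, "Leu": 2.2, "Ala": 2.0, "Gly": 1.6,
    "Val": 1.5, "Lys": 1.4, "Arg": 1.3, "Others": 9.3,
}
total_nitrogen_g_per_100g = 4.1   # hypothetical total N from elemental analysis

protein_g_per_100g = sum(amino_acids_g_per_100g.values())
n_to_p_factor = protein_g_per_100g / total_nitrogen_g_per_100g
print(f"Nitrogen-to-protein factor: {n_to_p_factor:.2f}")
print(f"Protein estimate: {n_to_p_factor * total_nitrogen_g_per_100g:.1f} g/100 g")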
The application of ab initio calculations to molecular spectroscopy
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.
1989-01-01
The state of the art in ab initio molecular structure calculations is reviewed with an emphasis on recent developments, such as full configuration-interaction benchmark calculations and atomic natural orbital basis sets. It is found that new developments in methodology, combined with improvements in computer hardware, are leading to unprecedented accuracy in solving problems in spectroscopy.
User Guide for GoldSim Model to Calculate PA/CA Doses and Limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, F.
2016-10-31
A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 “Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site”.
Bartella, Lucia; Mazzotti, Fabio; Napoli, Anna; Sindona, Giovanni; Di Donna, Leonardo
2018-03-01
A rapid and reliable method to assay the total amount of tyrosol and hydroxytyrosol derivatives in extra virgin olive oil has been developed. The methodology is intended to establish the nutritional quality of this edible oil, addressing recent international health claim legislation (European Commission Regulation No. 432/2012) and supporting a change in the classification of extra virgin olive oil to the status of a nutraceutical. The method is based on high-performance liquid chromatography coupled with tandem mass spectrometry and labeled internal standards, preceded by a fast microwave-assisted hydrolysis step performed under acid conditions. The overall process is particularly time-saving, much shorter than any methodology previously reported. The developed approach combines rapidity with accuracy: accuracy values near 100% were found for different fortified vegetable oils, while the RSD% values calculated from repeatability and reproducibility experiments were in all cases under 7%. Graphical abstract: Schematic of the methodology applied to the determination of tyrosol and hydroxytyrosol ester conjugates.
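A minimal sketch of the internal-standard quantitation step is given below, assuming isotope-dilution behavior in which the analyte concentration scales with the analyte-to-internal-standard peak-area ratio; the peak areas and spiking level are hypothetical.

# Minimal sketch of quantitation against a labeled internal standard (IS):
# concentration = (analyte area / IS area) * IS concentration, assuming equal
# MS response for the analyte and its labeled analogue. Values hypothetical.
def quantify(area_analyte: float, area_is: float, conc_is_mg_kg: float) -> float:
    """Analyte concentration in mg/kg from peak-area ratio and IS spiking level."""
    return (area_analyte / area_is) * conc_is_mg_kg

tyrosol = quantify(area_analyte=1.84e6, area_is=2.10e6, conc_is_mg_kg=50.0)
print(f"Total tyrosol (after hydrolysis): {tyrosol:.1f} mg/kg oil")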
NASA Astrophysics Data System (ADS)
Levesque, M.
Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up to date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update the object orbital parameters. The real satellite positions are provided by the detection of satellite streaks in astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology comprises several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and to determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corrupting the orbital parameter database.
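The matched-filter step can be illustrated with a simplified, non-iterative Python sketch that correlates a background-subtracted, star-removed frame with line-shaped kernels at a coarse set of orientations and keeps the strongest response; the kernel construction, orientation step, and test frame are assumptions for illustration, not the paper's implementation.

# Hedged sketch of a matched filter for straight streaks: correlate the image
# with line-shaped kernels at several orientations and keep the maximum response.
import numpy as np
from scipy import ndimage

def line_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Unit-energy kernel approximating a streak of given length and angle."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0                        # horizontal line
    k = ndimage.rotate(k, angle_deg, reshape=False, order=1)
    return k / np.linalg.norm(k)

def matched_filter_response(image: np.ndarray, length: int = 21) -> np.ndarray:
    """Maximum correlation response over a coarse set of streak orientations."""
    responses = [ndimage.correlate(image, line_kernel(length, a), mode="nearest")
                 for a in range(0, 180, 15)]
    return np.max(responses, axis=0)

# Hypothetical background-subtracted, star-removed frame with a faint streak
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (128, 128))
rr = np.arange(40, 90)
frame[rr, rr] += 1.5                               # faint diagonal streak
response = matched_filter_response(frame)
print("Peak SNR-like response:", response.max().round(2))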
Stefania, Gennaro A; Zanotti, Chiara; Bonomi, Tullia; Fumagalli, Letizia; Rotiroti, Marco
2018-05-01
Landfills are one of the most recurrent sources of groundwater contamination worldwide. In order to limit their impacts on groundwater resources, current environmental regulations impose the adoption of proper measures for the protection of groundwater quality. For instance, in the EU member countries, the calculation of trigger levels for identifying significant adverse environmental effects on groundwater generated by landfills is required by the Landfill Directive 99/31/EC. Although the derivation of trigger levels can be relatively easy when groundwater quality data prior to the construction of a landfill are available, it becomes challenging when these data are missing and landfills are located in areas that are already impacted by historical contamination. This work presents a methodology for calculating trigger levels for groundwater quality at landfills located in areas where historical contamination had deteriorated groundwater quality prior to their construction. The method is based on multivariate statistical analysis and involves four steps: (a) implementation of the conceptual model, (b) landfill monitoring data collection, (c) hydrochemical data clustering and (d) calculation of the trigger levels. The proposed methodology was applied to a case study in northern Italy, where a currently used lined landfill is located downstream of an old unlined landfill and other old unmapped waste deposits. The developed conceptual model stated that the groundwater quality deterioration observed downstream of the lined landfill is due to a degrading leachate plume fed by the upgradient unlined landfill. The methodology led to the determination of two trigger levels for COD and NH4-N, the former for a zone representing the background hydrochemistry (28 and 9 mg/L for COD and NH4-N, respectively), the latter for the zone impacted by the degrading leachate plume from the upgradient unlined landfill (89 and 83 mg/L for COD and NH4-N, respectively). Copyright © 2018 Elsevier Ltd. All rights reserved.
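Steps (c) and (d) can be sketched as follows, assuming a k-means clustering of the monitoring data and a 95th-percentile trigger statistic per cluster; both choices, and the data themselves, are hypothetical stand-ins for the multivariate procedure actually used.

# Hedged sketch of hydrochemical clustering and per-cluster trigger levels.
# The clustering algorithm (k-means) and the trigger statistic (95th percentile)
# are assumptions for illustration; data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# columns: COD [mg/L], NH4-N [mg/L]; two hypothetical hydrochemical settings
background = rng.normal([20.0, 5.0], [4.0, 1.5], size=(40, 2))
plume = rng.normal([70.0, 60.0], [10.0, 8.0], size=(40, 2))
samples = np.vstack([background, plume])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(samples)
for cluster in np.unique(labels):
    cod, nh4 = np.percentile(samples[labels == cluster], 95, axis=0)
    print(f"Cluster {cluster}: trigger COD = {cod:.0f} mg/L, NH4-N = {nh4:.0f} mg/L")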
Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A
2018-04-01
Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes, as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics is investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems is reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and the choice of tolerance limits for IMRT QA are made, with a focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent and comparable IMRT QA criteria among institutions. © 2018 American Association of Physicists in Medicine.
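For readers unfamiliar with the γ metric, the following brute-force Python sketch computes a global-normalization gamma pass rate for a 2D dose plane at the TG-218 recommended 3%/2 mm tolerance; real clinical tools add interpolation, local-normalization options, and other refinements not shown here, and the dose planes used are synthetic.

# Minimal global-gamma sketch for a 2D dose plane with uniform pixel spacing.
import numpy as np

def gamma_pass_rate(measured, calculated, spacing_mm, dd=0.03, dta_mm=2.0,
                    threshold=0.10):
    """Fraction of measured points with gamma <= 1 (global normalization)."""
    norm_dose = calculated.max()
    ny, nx = measured.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    gammas = []
    for iy, ix in zip(*np.where(measured >= threshold * norm_dose)):
        dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
        dose2 = ((calculated - measured[iy, ix]) / (dd * norm_dose)) ** 2
        gammas.append(np.sqrt(dist2 / dta_mm ** 2 + dose2).min())
    gammas = np.array(gammas)
    return (gammas <= 1.0).mean()

# Hypothetical 2 mm-resolution dose planes with a small systematic difference
calc = np.fromfunction(lambda y, x: np.exp(-((x - 32)**2 + (y - 32)**2) / 200.0),
                       (64, 64))
meas = 0.99 * calc
print(f"Gamma pass rate (3%/2 mm): {100 * gamma_pass_rate(meas, calc, 2.0):.1f}%")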
Buell, Gary R.; Markewich, Helaine W.
2004-01-01
U.S. Geological Survey investigations of environmental controls on carbon cycling in soils and sediments of the Mississippi River Basin (MRB), an area of 3.3 x 10^6 square kilometers (km2), have produced an assessment tool for estimating the storage and inventory of soil organic carbon (SOC) by using soil-characterization data from Federal, State, academic, and literature sources. The methodology is based on the linkage of site-specific SOC data (pedon data) to the soil-association map units of the U.S. Department of Agriculture State Soil Geographic (STATSGO) and Soil Survey Geographic (SSURGO) digital soil databases in a geographic information system. The collective pedon database assembled from individual sources presently contains 7,321 pedon records representing 2,581 soil series. SOC storage, in kilograms per square meter (kg/m2), is calculated for each pedon at standard depth intervals from 0 to 10, 10 to 20, 20 to 50, and 50 to 100 centimeters. The site-specific storage estimates are then regionalized to produce national-scale (STATSGO) and county-scale (SSURGO) maps of SOC to a specified depth. Based on this methodology, the mean SOC storage for the top meter of mineral soil in the MRB is approximately 10 kg/m2, and the total inventory is approximately 32.3 Pg (1 petagram = 10^9 metric tons). This inventory is from 2.5 to 3 percent of the estimated global mineral SOC pool.
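A per-pedon storage calculation of this kind can be sketched with the commonly used horizon formula SOC (kg/m2) = bulk density (g/cm3) x organic carbon fraction x thickness (cm) x (1 - coarse fragment fraction) x 10; the horizon data below are hypothetical, and the USGS aggregation rules for missing data are not reproduced.

# Hedged sketch of per-pedon SOC storage over the standard depth intervals,
# using the common horizon formula noted above. Horizon data are hypothetical.
def soc_storage_kg_m2(horizons, top_cm: float, bottom_cm: float) -> float:
    """Sum SOC over the part of each horizon that falls in [top_cm, bottom_cm]."""
    total = 0.0
    for h in horizons:
        overlap = min(h["bottom_cm"], bottom_cm) - max(h["top_cm"], top_cm)
        if overlap > 0:
            total += (h["bulk_density_g_cm3"] * h["organic_c_fraction"]
                      * overlap * (1.0 - h["coarse_frag_fraction"]) * 10.0)
    return total

pedon = [  # hypothetical horizon data
    {"top_cm": 0, "bottom_cm": 18, "bulk_density_g_cm3": 1.25,
     "organic_c_fraction": 0.021, "coarse_frag_fraction": 0.02},
    {"top_cm": 18, "bottom_cm": 60, "bulk_density_g_cm3": 1.40,
     "organic_c_fraction": 0.008, "coarse_frag_fraction": 0.05},
    {"top_cm": 60, "bottom_cm": 120, "bulk_density_g_cm3": 1.50,
     "organic_c_fraction": 0.003, "coarse_frag_fraction": 0.08},
]
for top, bottom in [(0, 10), (10, 20), (20, 50), (50, 100)]:
    print(f"{top:>3}-{bottom:<3} cm: {soc_storage_kg_m2(pedon, top, bottom):.2f} kg/m2")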
Determination of Particular Endogenous Fires Hazard Zones in Goaf with Caving of Longwall
NASA Astrophysics Data System (ADS)
Tutak, Magdalena; Brodny, Jaroslaw
2017-12-01
The hazard of endogenous fires is one of the basic and most commonly encountered occupational safety hazards in coal mines in Poland and worldwide. It denotes the possibility of coal self-ignition as a result of a self-heating process in a mining heading or its surroundings. In underground coal mining, ventilation of an operating longwall causes part of the airflow to migrate into the goaf with caving. If coal susceptible to self-ignition is present in the goaf, the airflow through it may create favourable conditions for coal oxidation and subsequently for self-heating and self-ignition. An endogenous fire formed under such conditions can pose a serious hazard to the crew and to the continuity of mine operation. From a practical point of view, it is very important to determine the zone within the goaf with caving in which the conditions necessary for an endogenous fire are fulfilled. Under real conditions, determining such a zone is practically impossible. The authors therefore developed a methodology for determining this zone based on the results of model tests. The methodology includes developing a model of the investigated area, determining the boundary conditions, and carrying out simulation calculations; based on the results, the zone particularly endangered by endogenous fire is determined. The model of the investigated region and the boundary conditions are based on the results of in situ measurements. The paper discusses the fundamental assumptions of the methodology, in particular the assumed hazard criterion and the sealing coefficient of the goaf with caving, and characterizes the mathematical model of gas flow through porous media. An example is presented of determining a zone particularly endangered by endogenous fire for a real layout of mine headings in a hard coal mine; a longwall ventilated in the "Y" system was investigated. For the given mining and geological conditions, the critical values of airflow velocity and oxygen concentration in the goaf that condition the initiation of the coal oxidation process were determined. The calculations used the ANSYS Fluent software, based on the finite volume method, which allows the physical and chemical parameters of the air to be determined very precisely at any point of the investigated mine headings and goaf with caving; such precise determination of these parameters from measurements under real conditions is practically impossible. The results obtained allow proper actions to be taken early in order to limit the occurrence of endogenous fires. It can be concluded that the presented methodology creates great possibilities for the practical application of model tests to improve occupational safety in mines.
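The final post-processing step, flagging goaf cells in which the critical conditions are met, can be sketched as follows; the form of the criteria (simple lower bounds on airflow velocity and oxygen concentration) and the threshold values are placeholders, not the mine-specific values determined in the study.

# Hedged sketch: flag goaf cells where placeholder self-heating criteria are met,
# from velocity and oxygen fields exported by a CFD run. Thresholds and the
# criterion form are illustrative assumptions only.
import numpy as np

def fire_hazard_zone(velocity_m_s, o2_fraction, v_crit=0.0015, o2_crit=0.08):
    """Boolean mask of cells fulfilling both placeholder self-heating criteria."""
    return (velocity_m_s >= v_crit) & (o2_fraction >= o2_crit)

# Hypothetical 2D slices of the goaf flow field
rng = np.random.default_rng(2)
velocity = rng.uniform(0.0, 0.004, (50, 200))
oxygen = rng.uniform(0.02, 0.21, (50, 200))
zone = fire_hazard_zone(velocity, oxygen)
print(f"Particularly hazardous cells: {zone.sum()} of {zone.size}")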
Banzato, Tommaso; Fiore, Enrico; Morgante, Massimo; Manuali, Elisabetta; Zotti, Alessandro
2016-10-01
Hepatic lipidosis is the most widespread hepatic disease in the lactating cow. A new methodology to estimate the degree of fatty infiltration of the liver in lactating cows by means of texture analysis of B-mode ultrasound images is proposed. B-mode ultrasonography of the liver was performed in 48 Holstein Friesian cows using standardized ultrasound parameters. Liver biopsies to determine the triacylglycerol content of the liver (TAGqa) were obtained from each animal. A large number of texture parameters were calculated on the ultrasound images by means of free software. Based on the TAGqa content of the liver, 29 samples were classified as mild (TAGqa<50mg/g), 6 as moderate (50mg/g
NASA Technical Reports Server (NTRS)
Guzman, Melissa
2015-01-01
The primary task for the summer was to procure the GCMS data from the National Space Science Data Coordinated Archive (NSSDCA) and to assess the current state of the data set for possible reanalysis opportunities. After procurement of the Viking GCMS data set and analysis of its current state, the internship focus shifted to preparing a plan for restoring and archiving the GCMS data set. A proposal was prepared and submitted to NASA Headquarters to restore and make available the 8000 mass chromatograms that are the basic data generated by the Viking GCMS instrument. The relevance of this restoration and the methodology we propose for it are presented. The secondary task for the summer was to develop a thermal model for the perceived temperature of a human standing on Mars, Titan, or Europa. Traditionally, an equation called "Fanger's comfort equation" is used to estimate the temperature perceived by a human in a given reference environment. However, there are limitations to this model when applied to other planets. Therefore, the approach for this project has been to derive energy balance equations from first principles and then develop a methodology for correlating "comfort" to energy balance. Using the -20 C walk-in freezer in the Space Sciences building at NASA Ames, the energy loss of a human subject is measured. The energy loss for a human on Mars, Titan, and Europa is calculated from first principles. These calculations are compared to the freezer measurements, e.g., for 1 minute on Titan, a human loses as much energy as in x minutes in a -20 C freezer. This gives a numerical comparison between the environments. These energy calculations are then used to assess the physiological comfort of a human based on the calculated energy losses.
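A first-principles energy-loss estimate of the kind described can be sketched with a simple convective-plus-radiative balance; the heat transfer coefficients, surface area, and temperatures below are hypothetical placeholders rather than the values used in the study.

# Hedged sketch of a steady-state energy-loss estimate,
# Q = h*A*(Ts - Ta) + eps*sigma*A*(Ts^4 - Ta^4). All inputs are hypothetical.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)

def heat_loss_w(t_surface_k: float, t_ambient_k: float,
                area_m2: float = 1.8, h_conv: float = 5.0,
                emissivity: float = 0.95) -> float:
    """Convective plus radiative heat loss in watts."""
    convective = h_conv * area_m2 * (t_surface_k - t_ambient_k)
    radiative = emissivity * SIGMA * area_m2 * (t_surface_k**4 - t_ambient_k**4)
    return convective + radiative

freezer = heat_loss_w(t_surface_k=303.0, t_ambient_k=253.0)               # -20 C freezer
mars = heat_loss_w(t_surface_k=303.0, t_ambient_k=210.0, h_conv=0.5)      # thin CO2 atmosphere
print(f"Freezer loss: {freezer:.0f} W; Mars loss: {mars:.0f} W")
print(f"1 minute on Mars ~ {mars / freezer:.1f} minutes in the freezer")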