Chi, Ching-Chi; Wang, Shu-Hui
2014-01-01
Compared with conventional therapies, biologics are more effective but more expensive in treating psoriasis. To evaluate the efficacy and cost-efficacy of biologic therapies for psoriasis, we conducted a meta-analysis to calculate the efficacy of etanercept, adalimumab, infliximab, and ustekinumab in achieving at least a 75% reduction in the Psoriasis Area and Severity Index score (PASI 75) and a Physician's Global Assessment score of clear/minimal (PGA 0/1). The cost-efficacy was assessed by calculating the incremental cost-effectiveness ratio (ICER) per subject achieving PASI 75 and PGA 0/1. The incremental efficacy regarding PASI 75 was 55% (95% confidence interval (95% CI) 38%-72%), 63% (95% CI 59%-67%), 71% (95% CI 67%-76%), 67% (95% CI 62%-73%), and 72% (95% CI 68%-75%) for etanercept, adalimumab, infliximab, and ustekinumab 45 mg and 90 mg, respectively. The corresponding 6-month ICER regarding PASI 75 was $32,643 (best case $24,936; worst case $47,246), $21,315 (best case $20,043; worst case $22,760), $27,782 (best case $25,954; worst case $29,440), $25,055 (best case $22,996; worst case $27,075), and $46,630 (best case $44,765; worst case $49,373), respectively. The results regarding PGA 0/1 were similar. Infliximab and ustekinumab 90 mg had the highest efficacy, while adalimumab had the best cost-efficacy, followed by ustekinumab 45 mg and infliximab.
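As a consistency check (our arithmetic, not the paper's), if the ICER is defined in the standard way as incremental cost divided by incremental response probability, the implied 6-month incremental cost can be back-calculated from the reported adalimumab figures:

```latex
\mathrm{ICER} = \frac{\Delta C}{\Delta p_{\mathrm{PASI\,75}}}
\;\;\Longrightarrow\;\;
\Delta C_{\mathrm{adalimumab}} \approx \$21{,}315 \times 0.63 \approx \$13{,}428
\ \text{per subject over 6 months.}
```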
Hop, G E; Mourits, M C M; Oude Lansink, A G J M; Saatkamp, H W
2016-02-01
The cross-border region of the Netherlands (NL) and the two German states of North Rhine-Westphalia (NRW) and Lower Saxony (LS) is a large and highly integrated livestock production area. This region is increasingly developing into a single epidemiological area in which disease introduction is a shared veterinary and, consequently, economic risk. The objectives of this study were to examine the veterinary and direct economic impacts of classical swine fever (CSF) control strategies for NL, NRW and LS given the current production structure, and to analyse the cross-border causes and impacts of CSF within the NL-NRW-LS region. The course of the epidemic was simulated using InterSpread Plus, whereas the economic analysis was restricted to calculating disease control costs and the costs directly resulting from the control measures applied. Three veterinary control strategies were considered: a strategy based on the minimum EU requirements, and a vaccination strategy and a depopulation strategy based on the Dutch and German contingency plans. Regardless of the veterinary control strategy, simulated outbreak sizes and durations for 2010 were much smaller than those simulated previously using data from over 10 years ago. For example, worst-case outbreaks (50th percentile) in NL resulted in 30-40 infected farms and lasted for two to four and a half months; associated direct costs and direct consequential costs ranged from €24.7 to 28.6 million and €11.7 to 26.7 million, respectively. Both the vaccination and depopulation strategies were efficient in controlling outbreaks, especially large outbreaks, whereas the EU minimum strategy was especially deficient in controlling worst-case outbreaks. Both the vaccination and depopulation strategies resulted in low direct costs and direct consequential costs. The probability of cross-border disease spread was relatively low, and cross-border spread resulted in small, short outbreaks in neighbouring countries. A few opportunities for further cross-border harmonization and collaboration were identified, including the implementation of cross-border regions (free and diseased regions regardless of the border) in case of outbreaks within close proximity of the border, and more and quicker sharing of information across the border. It was expected, however, that collaboration to mitigate the market effects of an epidemic would create more opportunities to lower the impact of CSF outbreaks in a cross-border context. © 2014 Blackwell Verlag GmbH.
An SEU resistant 256K SOI SRAM
NASA Astrophysics Data System (ADS)
Hite, L. R.; Lu, H.; Houston, T. W.; Hurta, D. S.; Bailey, W. E.
1992-12-01
A novel SEU (single event upset) resistant SRAM (static random access memory) cell has been implemented in a 256K SOI (silicon-on-insulator) SRAM that has attractive performance characteristics over the military temperature range of -55 to +125 °C. These include a worst-case access time of 40 ns with an active power of only 150 mW at 25 MHz, and a worst-case minimum WRITE pulse width of 20 ns. Measured SEU performance gives an Adams 10 percent worst-case error rate of 3.4 x 10^-11 errors/bit-day using the CRUP code with a conservative first-upset LET threshold. Modeling does show that a higher bipolar gain than that measured on a sample from the SRAM lot would produce a lower error rate. Measurements show the worst-case supply voltage for SEU to be 5.5 V. Analysis has shown this to be caused primarily by the drain-voltage dependence of the beta of the SOI parasitic bipolar transistor. Based on this, SEU experiments with SOI devices should include measurements as a function of supply voltage, rather than only at the traditional 4.5 V, to determine the worst-case condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolan, N.A.; Hansen-Murray, C.S.; Haynes, R.W.
Implications of the interim comprehensive strategy for improved Pacific salmon and steelhead habitat management (PACFISH) were estimated for those Bureau of Land Management (BLM) districts and National Forest System (NFS) lands west of the Rocky Mountains that have anadromous fish. The physical impacts and associated mitigation costs of implementing the PACFISH strategy over the next decade in Pacific Northwest, Intermountain, Northern, Pacific Southwest, and Alaska Region National Forest and BLM district recreation, range, and timber programs were analyzed with the actual current output as the base. Economic considerations were added to evaluate any change in the perceived ranking of severity among the impacts. Two cases were considered in the analyses: a derived worst case, where a total reduction of the actual current output of the programs in anadromous fishbearing drainages occurs (giving a minimum value for the programs in those drainages), and a mitigated case, where all or part of the loss is mitigated and the cost of doing so is evaluated in two phases, one without economics and the other with it.
SU-E-T-551: PTV Is the Worst-Case of CTV in Photon Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, D; Liu, W; Park, P
2014-06-01
Purpose: To examine the supposition of the static dose cloud and the adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of the clinical target volume (CTV) dose distribution for photon therapy in head and neck (H and N) plans. Methods: Five diverse H and N plans clinically delivered at our institution were selected. The isocenter for each plan was shifted positively and negatively in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (3 mm), for a total of six shifted plans per original plan. The perturbed plan dose was recalculated in Eclipse (AAA v11.0.30) using the same, fixed fluence map as the original plan. The dose distributions for all plans were exported from the treatment planning system to determine the worst-case CTV dose distributions for each nominal plan. Two worst-case distributions, cold and hot, were defined by selecting the minimum or maximum dose per voxel from all the perturbed plans. The resulting dose volume histograms (DVH) were examined to evaluate the worst-case CTV and nominal PTV dose distributions. Results: Inspection demonstrates that the CTV DVH in the nominal dose distribution is indeed bounded by the CTV DVHs in the worst-case dose distributions. Furthermore, comparison of the D95% for the worst-case (cold) CTV and nominal PTV distributions by Pearson's chi-square test shows excellent agreement for all plans. Conclusion: The assumption that the nominal dose distribution for the PTV represents the worst-case dose distribution for the CTV appears valid for the five plans under examination. Although the worst-case dose distributions are unphysical, since the dose per voxel is chosen independently, the cold worst-case distribution serves as a lower bound for the worst possible CTV coverage. Minor discrepancies between the nominal PTV dose distribution and the worst-case CTV dose distribution are expected since the dose cloud is not strictly static. This research was supported by the NCI through grant K25CA168984, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by the Fraternal Order of Eagles Cancer Research Fund, and by the Career Development Award Program at Mayo Clinic.
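The voxelwise min/max construction described above is straightforward to express in code. The sketch below assumes each shifted plan's dose has been exported onto a common grid as a NumPy array; the function names, shapes, and bin count are illustrative, not from the study.

```python
# Worst-case "cold" and "hot" dose distributions from a set of perturbed plans.
import numpy as np

def worst_case_distributions(perturbed_doses):
    """perturbed_doses: list of 3-D dose arrays (Gy), one per isocenter shift.

    Returns (cold, hot): per-voxel minimum and maximum over all shifts.
    """
    stack = np.stack(perturbed_doses, axis=0)  # shape: (n_shifts, nx, ny, nz)
    cold = stack.min(axis=0)                   # worst-case cold distribution
    hot = stack.max(axis=0)                    # worst-case hot distribution
    return cold, hot

def dvh(dose, structure_mask, bins=200):
    """Cumulative DVH: fraction of structure volume receiving >= each dose level."""
    d = dose[structure_mask]
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction
```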
Estimated cost of universal public coverage of prescription drugs in Canada
Morgan, Steven G.; Law, Michael; Daw, Jamie R.; Abraham, Liza; Martin, Danielle
2015-01-01
Background: With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. Methods: We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Results: Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. Interpretation: The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. PMID:25780047
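A quick consistency check on the headline base-case figures (our arithmetic, not the paper's): the net societal reduction should equal the private-sector savings offset by the government increase, and it does up to rounding in the reported values:

```latex
\underbrace{-\$8.2\,\text{billion}}_{\text{private savings}}
\;+\;
\underbrace{\$1.0\,\text{billion}}_{\text{government increase}}
\;\approx\; -\$7.2\,\text{billion}
\;\approx\; -\$7.3\,\text{billion (reported net reduction)}.
```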
Estimated cost of universal public coverage of prescription drugs in Canada.
Morgan, Steven G; Law, Michael; Daw, Jamie R; Abraham, Liza; Martin, Danielle
2015-04-21
With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. © 2015 Canadian Medical Association or its licensors.
Yang, Tsong-Shing; Chi, Ching-Chi; Wang, Shu-Hui; Lin, Jing-Chi; Lin, Ko-Ming
2016-10-01
Biologic therapies are more effective but more costly than conventional therapies in treating psoriatic arthritis. To evaluate the cost-efficacy of etanercept, adalimumab and golimumab therapies in treating active psoriatic arthritis in a Taiwanese setting, we conducted a meta-analysis of randomized placebo-controlled trials to calculate the incremental efficacy of etanercept, adalimumab and golimumab, respectively, in achieving the Psoriatic Arthritis Response Criteria (PsARC) and a 20% improvement in the American College of Rheumatology score (ACR20). The base-, best- and worst-case incremental cost-effectiveness ratios (ICERs) for one subject to achieve PsARC and ACR20 were calculated. The annual ICERs per PsARC responder were US$27 047 (best scenario US$16 619; worst scenario US$31 350), US$39 339 (best scenario US$31 846; worst scenario US$53 501) and US$27 085 (best scenario US$22 716; worst scenario US$33 534) for etanercept, adalimumab and golimumab, respectively. The annual ICERs per ACR20 responder were US$27 588 (best scenario US$20 900; worst scenario US$41 800), US$39 339 (best scenario US$25 236; worst scenario US$83 595) and US$33 534 (best scenario US$27 616; worst scenario US$44 013) for etanercept, adalimumab and golimumab, respectively. In a Taiwanese setting, etanercept had the lowest annual costs per PsARC and ACR20 responder, while adalimumab had the highest annual costs per PsARC and ACR20 responder. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, both cost and flow measures are important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the possible maximum flow with minimum cost. This paper also combines the Adaptive Weight Approach (AWA), which utilizes useful information from the current population to readjust weights and obtain a search pressure toward the positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
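A minimal sketch of the priority-based decoding such a GA relies on (Gen-Cheng style): each chromosome assigns a priority to every node, and a path is grown from the source by repeatedly stepping to the unvisited neighbour with the highest priority. Names and the toy network are illustrative, not from the paper.

```python
def decode_path(priority, adjacency, source, sink):
    """priority: dict node -> priority value (the chromosome).
    adjacency: dict node -> iterable of neighbour nodes.
    Returns a source-sink path, or None if decoding dead-ends."""
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [n for n in adjacency[node] if n not in visited]
        if not candidates:
            return None                      # infeasible chromosome
        node = max(candidates, key=lambda n: priority[n])
        visited.add(node)
        path.append(node)
    return path

# Example on a 4-node network:
adj = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}
print(decode_path({1: 0, 2: 7, 3: 3, 4: 9}, adj, 1, 4))  # -> [1, 2, 4]
```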
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desirable to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to the current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance over the allowable range of the uncertain model parameter(s), and this determines its guaranteed cost. In the worst case it outperforms the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that by these measures our filter provides the best results in the worst case.
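Schematically (a sketch under our own simplifications, not the paper's filter equations), the guaranteed-cost design picks the filter whose worst-case variance over the uncertain parameter range is smallest; variance() is a placeholder for the actual filter-variance computation:

```python
import numpy as np

def minimax_design(designs, theta_grid, variance):
    """Return the design with the smallest worst-case variance over theta_grid,
    together with that worst-case value (the 'guaranteed cost')."""
    worst = {d: max(variance(d, th) for th in theta_grid) for d in designs}
    best = min(worst, key=worst.get)
    return best, worst[best]

# Toy usage: quadratic penalty for mismatch between design and true parameter.
designs = np.linspace(0.0, 1.0, 101)
thetas = np.linspace(0.2, 0.8, 61)
d_star, cost = minimax_design(designs, thetas, lambda d, th: (d - th) ** 2 + 0.1)
```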
ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)
Kim, Donghwan; Fessler, Jeffrey A.
2017-01-01
This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis framework known as the Performance Estimation Problem [11]. PMID:29805242
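For reference, a compact implementation of standard FISTA for the ℓ1-regularized least-squares problem that motivates the paper (this is the textbook algorithm of [3], not the new variant proposed here):

```python
# FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T(Ax - b)
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2       # momentum parameter
        y = x_new + ((t - 1) / t_new) * (x_new - x)    # extrapolation
        x, t = x_new, t_new
    return x
```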
Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.
Ferrie, Christopher; Blume-Kohout, Robin
2016-03-04
A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
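Formally, with relative entropy as the loss, the minimax estimator minimizes the worst-case expected risk over all true states:

```latex
\hat{\rho}_{\mathrm{minimax}}
= \arg\min_{\hat{\rho}} \; \max_{\rho} \;
\mathbb{E}_{\mathrm{data}\sim\rho}\!\left[ D\big(\rho \,\|\, \hat{\rho}(\mathrm{data})\big) \right],
\qquad
D(\rho\|\sigma) = \operatorname{Tr}\!\big[\rho(\log\rho - \log\sigma)\big].
```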
Pak, Raymond W; Moskowitz, Eric J; Bagley, Demetrius H
2009-03-01
For many years, the gold standard in the management of upper urinary tract transitional-cell carcinoma (UT-TCC) has been nephroureterectomy with excision of the bladder cuff. Advances in endourologic instrumentation have allowed urologists to manage this malignancy conservatively. The feasibility and success of conservative measures for UT-TCC have been widely published, but no objective cost analysis has been performed to date. Our goal was to examine the direct costs of renal-sparing conservative measures v nephroureterectomy and subsequent chronic kidney disease (CKD) or end-stage renal disease (ESRD). Secondary analysis includes a discussion of survival and quality-of-life issues for both treatment cohorts. Retrospective review of a cohort of patients treated at our institution with renal-sparing ureteroscopic management of UT-TCC who were followed for a minimum of 2 years. The costs per case were based on equipment, anesthesia, surgeon fees, pathologic evaluation fees, and hospital stay. ESRD and CKD costs were estimated based on published reports. From 1996 to 2006, 254 patients were evaluated and treated for UT-TCC at our institution. A cohort of 57 patients with a minimum follow-up period of 2 years was examined. Renal preservation in our series approached 81%, with cancer-specific survival of 94.7%. Assuming a worst-case scenario of a solitary kidney with recurrences at each follow-up for 5 years v nephroureterectomy and dialysis for the same period, an estimated US$252,272 would be saved. This savings would cover the expenses of five cadaveric renal transplantations. Conservative endoscopic management of UT-TCC in our experience should be the gold standard management for low-grade and superficial-stage disease. From a cost perspective, renal-sparing UT-TCC management is effective in reducing ESRD health care expenses.
Kiatpongsan, Sorapop; Kim, Jane J
2014-01-01
Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15-30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. 
Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain.
Kiatpongsan, Sorapop; Kim, Jane J.
2014-01-01
Background Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15–30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). Methods and Findings The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. Conclusions This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. 
Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain. PMID:25198104
NASA Astrophysics Data System (ADS)
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-04-01
Vibration energy harvesting has been shown to be a promising power source for many small-scale applications, mainly because of the considerable reduction in the energy consumption of modern electronics and the scalability issues of conventional batteries. However, energy harvesters may not be as robust as conventional batteries, and their performance can drastically deteriorate in the presence of uncertainty in their parameters. Hence, the study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While previous studies have focused on optimizing the expected power, we propose a new and more practical optimization perspective: optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and, as a simple example, apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power, and then optimize for it under different confidence levels. The results show a significant improvement in the worst-case power of the harvester designed this way compared to that of a naively (deterministically) optimized harvester.
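A minimal sketch of the worst-case optimization the authors advocate, assuming a grid over the uncertain parameters; power(), wn, and k2 are illustrative placeholders for the linear harvester's average-power expression, not the paper's notation:

```python
import itertools
import numpy as np

def robust_optimum(R_grid, wn_range, k2_range, power, n=21):
    """Pick the load resistance R maximizing the minimum (worst-case) power
    over a box of uncertain natural frequency (wn) and coupling (k2)."""
    samples = list(itertools.product(np.linspace(*wn_range, n),
                                     np.linspace(*k2_range, n)))
    def worst_power(R):
        return min(power(R, wn, k2) for wn, k2 in samples)
    R_best = max(R_grid, key=worst_power)
    return R_best, worst_power(R_best)
```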
Maximum and minimum return losses from a passive two-port network terminated with a mismatched load
NASA Technical Reports Server (NTRS)
Otoshi, T. Y.
1993-01-01
This article presents an analytical method for determining the exact distance a load is required to be offset from a passive two-port network to obtain maximum or minimum return losses from the terminated two-port network. Equations are derived in terms of two-port network S-parameters and load reflection coefficient. The equations are useful for predicting worst-case performances of some types of networks that are terminated with offset short-circuit loads.
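The underlying relation (standard microwave network theory, consistent with but not copied from the article's derivation): offsetting the load by a lossless line of length l rotates the load reflection coefficient, and the return loss of the terminated two-port follows from the input reflection coefficient,

```latex
\Gamma_{\mathrm{in}}(l) = S_{11}
+ \frac{S_{12} S_{21}\, \Gamma_L e^{-j2\beta l}}{1 - S_{22}\, \Gamma_L e^{-j2\beta l}},
\qquad
\mathrm{RL}(l) = -20 \log_{10} \left| \Gamma_{\mathrm{in}}(l) \right| \ \mathrm{dB},
```

so the minimum and maximum return losses occur at the offsets l where the rotating phase 2βl carries the second term to the points of largest and smallest |Γ_in|.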
Friction Stir Weld Restart+Reweld Repair Allowables
NASA Technical Reports Server (NTRS)
Clifton, Andrew
2008-01-01
A friction stir weld (FSW) repair method has been developed and successfully implemented on Al 2195 plate material for the Space Shuttle External Fuel Tank (ET). The method includes restarting the friction stir weld in the termination hole of the original weld, followed by two reweld passes. Room-temperature and cryogenic-temperature mechanical properties exceeded the minimum FSW design strength and compared well with the development data. Simulated service test results also compared closely to historical data for the initial FSW, confirming no change to the critical flaw size or inspection requirements for the repaired weld. VPPA fusion/FSW intersection weld specimens exhibited acceptable strength and exceeded the minimum design value. Porosity, when present at the intersection, was on the root-side toe of the fusion weld, the "worst case" being 0.7 inch long. While such porosity may be removed by sanding, this "worst case" porosity condition was tested "as is" and demonstrated that the porosity did not negatively affect the strength of the intersection weld. Large, 15-inch "wide panel" FSW repair welds were tested to demonstrate strength and evaluate residual stresses using photostress analysis. All results exceeded design minimums, and photostress analysis showed no significant stress gradients due to the presence of the restart and multi-pass FSW repair weld.
Isolator fragmentation and explosive initiation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Rae, Philip John; Foley, Timothy J.
2016-09-19
Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst case from an inadvertent-initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.
Isolator fragmentation and explosive initiation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Rae, Philip John; Foley, Timothy J.
2015-09-30
Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst case from an inadvertent-initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.
Minimax Quantum Tomography: Estimators and Relative Entropy Bounds
Ferrie, Christopher; Blume-Kohout, Robin
2016-03-04
A minimax estimator has the minimum possible error ("risk") in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien
2016-06-01
A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
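In outline, the three-step loop reads as below; design_pulse() and worst_case_sar() stand in for the paper's pulse-design and worst-case-SAR solvers, and the geometric tightening factor is our illustrative choice, not the paper's update rule:

```python
def robust_pulse(sar_limit, design_pulse, worst_case_sar,
                 shrink=0.9, max_iter=20):
    """Design a pTx pulse whose worst-case SAR under RF errors stays within
    the safety limit by iteratively tightening the design constraint."""
    constraint = sar_limit
    for _ in range(max_iter):
        pulse = design_pulse(sar_constraint=constraint)
        if worst_case_sar(pulse) <= sar_limit:
            return pulse                  # safe even under RF-chain errors
        constraint *= shrink              # tighten and re-design
    raise RuntimeError("no feasible pulse within iteration budget")
```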
Lunar Orbit Insertion Targeting and Associated Outbound Mission Design for Lunar Sortie Missions
NASA Technical Reports Server (NTRS)
Condon, Gerald L.
2007-01-01
This report details the Lunar Orbit Insertion (LOI) arrival targeting and associated mission design philosophy for Lunar sortie missions with up to a 7-day surface stay and with global Lunar landing site access. It also documents the assumptions, methodology, and requirements validated by TDS-04-013, Integrated Transit Nominal and Abort Characterization and Sensitivity Study. This report examines the generation of the Lunar arrival parking orbit inclination and Longitude of the Ascending Node (LAN) targets supporting surface missions with global Lunar landing site access. These targets support the Constellation Program requirement for anytime abort (early return) by providing for a minimized worst-case wedge angle [and an associated minimum plane-change delta-velocity (ΔV) cost] between the Crew Exploration Vehicle (CEV) and the Lunar Surface Access Module (LSAM) for an LSAM launch anytime during the Lunar surface stay.
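The wedge angle drives the abort cost through the standard impulsive plane-change relation (general orbital mechanics, not a formula quoted from this report): for a wedge angle θ between the CEV and LSAM orbit planes at orbital speed v,

```latex
\Delta v = 2\, v \, \sin\!\left(\frac{\theta}{2}\right),
```

so minimizing the worst-case θ over the surface stay directly minimizes the worst-case plane-change ΔV.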
A VLSI implementation of DCT using pass transistor technology
NASA Technical Reports Server (NTRS)
Kamath, S.; Lynn, Douglas; Whitaker, Sterling
1992-01-01
A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in real time, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which use a modified Booth recoding algorithm for multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and re-sizing.
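For illustration, a small behavioral model of textbook radix-4 modified Booth recoding (our sketch of the algorithm family the MAC cells use, not the chip's circuit): the multiplier is scanned in overlapping 3-bit groups, each selecting a partial product from {-2a, -a, 0, +a, +2a}.

```python
def booth_multiply(a, b, nbits=16):
    """Multiply two signed integers via radix-4 modified Booth recoding."""
    # Digit for bits (b[2i+1], b[2i], b[2i-1]): -2*b[2i+1] + b[2i] + b[2i-1].
    table = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
             0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    b_u = b & ((1 << nbits) - 1)          # two's-complement view of b
    acc = 0
    prev = 0                              # implicit bit to the right of b[0]
    for i in range(0, nbits, 2):
        group = (((b_u >> i) & 0b11) << 1) | prev
        acc += (table[group] * a) << i    # digit weight 4^(i/2) = 2^i
        prev = (b_u >> (i + 1)) & 1
    return acc

assert booth_multiply(3, 5, 4) == 15
assert booth_multiply(3, -3, 4) == -9
```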
Code of Federal Regulations, 2011 CFR
2011-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2012 CFR
2012-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2013 CFR
2013-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2014 CFR
2014-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2010 CFR
2010-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Acute respiratory distress syndrome: frequency, clinical course, and costs of care.
Valta, P; Uusaro, A; Nunes, S; Ruokonen, E; Takala, J
1999-11-01
To define the occurrence rate of acute respiratory distress syndrome (ARDS) using established criteria in a well-defined general patient population, to study the clinical course of ARDS when patients were ventilated using a "lung-protective" strategy, and to define the total costs of care. A 3-yr (1993 through 1995) retrospective descriptive analysis of all patients with ARDS treated in Kuopio University Hospital. Intensive care unit in the university hospital. Fifty-nine patients fulfilled the definition of ARDS: Pao2/Fio2 <200 mm Hg (33.3 kPa) during mechanical ventilation and bilateral infiltrates on chest radiograph. None. With a patient data management system, the day-by-day data of hemodynamics, ventilation, respiratory mechanics, gas exchange, and organ failures were collected during the period that Pao2/Fio2 ratio was <200 mm Hg (33.3 kPa). The frequency of ARDS was 4.9 cases/100,000 inhabitants/yr. Pneumonia and sepsis were the most common causes of ARDS. Mean age was 43+/-2 yrs. At the time of lowest Pao2/Fio2, the nonsurvivors had lower arterial and venous oxygen saturations and higher arterial lactate than survivors, whereas there were no differences between the groups in other parameters. Multiple organ dysfunction preceded the worst oxygenation in both the survivors and nonsurvivors. The intensive care mortality was 37%; hospital mortality and mortality after a minimum 8 months of follow-up was 42%. The most frequent cause of death was multiple organ failure. The effective costs of intensive care per survivor were US $73,000. The outcome of ARDS is unpredictable at the time of onset and also at the time of the worst oxygenation. Keeping the inspiratory pressures low (30-35 cm H2O [2.94 to 3.43 kPa]) reduces the frequency of pneumothorax, and might lower the mortality. Most patients are young, and therefore the costs per saved year of life are low.
Massive yet grossly underestimated global costs of invasive insects
Bradshaw, Corey J. A.; Leroy, Boris; Bellard, Céline; Roiz, David; Albert, Céline; Fournier, Alice; Barbet-Massin, Morgane; Salles, Jean-Michel; Simard, Frédéric; Courchamp, Franck
2016-01-01
Insects have presented human society with some of its greatest development challenges by spreading diseases, consuming crops and damaging infrastructure. Despite the massive human and financial toll of invasive insects, cost estimates of their impacts remain sporadic, spatially incomplete and of questionable quality. Here we compile a comprehensive database of economic costs of invasive insects. Taking all reported goods and service estimates, invasive insects cost a minimum of US$70.0 billion per year globally, while associated health costs exceed US$6.9 billion per year. Total costs rise as the number of estimates increases, although many of the worst costs have already been estimated (especially those related to human health). A lack of dedicated studies, especially for reproducible goods and service estimates, implies gross underestimation of global costs. Global warming as a consequence of climate change, rising human population densities and intensifying international trade will allow these costly insects to spread into new areas, but substantial savings could be achieved by increasing surveillance, containment and public awareness. PMID:27698460
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
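A minimal sketch of the two-stage structure described above, under illustrative assumptions (lognormal initial loads, first-order decay during depuration; the parameter values are placeholders, not the paper's estimates):

```python
import math
from statistics import NormalDist

def minimum_depuration_time(mean_log, sd_log, decay_rate, limit, q=0.95):
    """Hours until the q-th percentile pathogen load falls below `limit`.

    Stage (i): worst-case initial load taken at an upper quantile of a
    lognormal population distribution. Stage (ii): exponential decay
    n(t) = n0 * exp(-decay_rate * t) during depuration.
    """
    n0 = math.exp(mean_log + sd_log * NormalDist().inv_cdf(q))
    if n0 <= limit:
        return 0.0
    return math.log(n0 / limit) / decay_rate  # solve n0*exp(-k t) = limit

# e.g. minimum_depuration_time(mean_log=6.0, sd_log=1.2,
#                              decay_rate=0.02, limit=200.0)
```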
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.
Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-09-18
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and to enhance fairness between users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
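The outer Dinkelbach loop is compact enough to sketch; solve_sub() stands in for the inner CCCP step that solves the parametric subproblem, and the function names are illustrative:

```python
def dinkelbach(solve_sub, rate, power, q0=0.0, tol=1e-6, max_iter=50):
    """Maximize the fractional objective rate(x)/power(x).

    Repeatedly solve the parametric subproblem max_x rate(x) - q*power(x)
    and update q until the optimal value is ~0; q then equals the maximum
    energy efficiency."""
    q = q0
    x = solve_sub(q)
    for _ in range(max_iter):
        f = rate(x) - q * power(x)
        if abs(f) < tol:
            break
        q = rate(x) / power(x)        # Dinkelbach update
        x = solve_sub(q)
    return x, q
```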
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System
Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-01-01
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and to enhance fairness between users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019
Derivation and experimental verification of clock synchronization theory
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1994-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
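For reference, the fault-tolerant midpoint correction at the heart of the second algorithm family can be sketched in a few lines (an illustrative Welch-Lynch-style model, not the experimental hardware's logic): each node sorts its measured offsets to all clocks, discards the f largest and f smallest, and adjusts to the midpoint of the remaining extremes.

```python
def midpoint_correction(offsets, f):
    """offsets: this node's measured offsets to all clocks (0.0 for itself).
    f: number of faulty clocks to tolerate. Returns the clock adjustment."""
    s = sorted(offsets)
    trimmed = s[f:len(s) - f]            # discard f lowest and f highest
    return (trimmed[0] + trimmed[-1]) / 2.0

# A maliciously large reading is trimmed away:
# midpoint_correction([0.0, 2.0, -1.0, 50.0], f=1) -> midpoint of [0, 2] = 1.0
```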
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
Worst case analysis: Earth sensor assembly for the tropical rainfall measuring mission observatory
NASA Technical Reports Server (NTRS)
Conley, Michael P.
1993-01-01
This worst-case analysis verifies that the TRMMESA electronic design is capable of maintaining performance requirements when subjected to worst-case circuit conditions. The TRMMESA design is a proven heritage design, capable of withstanding the most adverse circuit conditions. Changes made to the baseline DMSP design are relatively minor and do not adversely affect the worst-case analysis of the TRMMESA electrical design.
The minimum control authority of a system of actuators with applications to Gravity Probe-B
NASA Technical Reports Server (NTRS)
Wiktor, Peter; Debra, Dan
1991-01-01
The forcing capabilities of systems composed of many actuators are analyzed in this paper. Multiactuator systems can generate higher forces in some directions than in others. Techniques are developed to find the force in the weakest direction. This corresponds to the worst-case output and is defined as the 'minimum control authority'. The minimum control authority is a function of three things: the actuator configuration, the actuator controller and the way in which the output of the system is limited. Three output limits are studied: (1) fuel-flow rate, (2) power, and (3) actuator output. The three corresponding actuator controllers are derived. These controllers generate the desired force while minimizing either fuel flow rate, power or actuator output. It is shown that using the optimal controller can substantially increase the minimum control authority. The techniques for calculating the minimum control authority are applied to the Gravity Probe-B spacecraft thruster system. This example shows that the minimum control authority can be used to design the individual actuators, choose actuator configuration, actuator controller, and study redundancy.
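A sketch of how the minimum control authority can be computed numerically for the actuator-output-limited case (our formulation; the LP setup, 2-D force space, and bounds are illustrative): for each sampled unit direction, a linear program finds the largest force achievable exactly along that direction, and the minimum over directions is the authority.

```python
import numpy as np
from scipy.optimize import linprog

def authority_along(B, d, u_max=1.0):
    """Largest alpha such that B u = alpha*d with |u_i| <= u_max.

    B: (force-dim x n_actuators) matrix whose columns are each actuator's
    force contribution; d: unit direction vector."""
    m = B.shape[1]
    c = np.zeros(m + 1); c[-1] = -1.0           # maximize alpha
    A_eq = np.hstack([B, -d.reshape(-1, 1)])    # B u - alpha*d = 0
    bounds = [(-u_max, u_max)] * m + [(0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=np.zeros(B.shape[0]), bounds=bounds)
    return res.x[-1]

def minimum_control_authority(B, n_dirs=360):
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 2-D example
    return min(authority_along(B, d) for d in dirs)
```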
Rampersaud, Y Raja; Tso, Peggy; Walker, Kevin R; Lewis, Stephen J; Davey, J Roderick; Mahomed, Nizar N; Coyte, Peter C
2014-02-01
Although total hip arthroplasty (THA) and total knee arthroplasty (TKA) have been widely accepted as highly cost-effective procedures, spine surgery for the treatment of degenerative conditions does not share the same perception among stakeholders. In particular, the sustainability of the outcome and cost-effectiveness following lumbar spinal stenosis (LSS) surgery compared with THA/TKA remain uncertain. The purpose of the study was to estimate the lifetime incremental cost-utility ratios for decompression and decompression with fusion for focal LSS versus THA and TKA for osteoarthritis (OA) from the perspective of the provincial health insurance system (predominantly from the hospital perspective), based on long-term health status data at a median of 5 years after surgical intervention. An incremental cost-utility analysis from a hospital perspective was based on a single-center, retrospective longitudinal matched cohort study of prospectively collected outcomes and retrospectively collected costs. Patients who had undergone primary one- to two-level spinal decompression with or without fusion for focal LSS were compared with a matched cohort of patients who had undergone elective THA or TKA for primary OA. Outcome measures included the incremental cost-utility ratio (ICUR) ($/quality-adjusted life year [QALY]) determined using perioperative costs (direct and indirect) and Short Form-6D (SF-6D) utility scores converted from the SF-36. Patient outcomes were collected using the SF-36 survey preoperatively and annually for a minimum of 5 years. Utility was modeled over the lifetime, and QALYs were determined using the median 5-year health status data. The primary outcome measure, cost per QALY gained, was calculated by estimating the mean incremental lifetime costs and QALYs for each diagnosis group after discounting costs and QALYs at 3%. Sensitivity analyses adjusting for ±25% primary and revision surgery cost, ±25% revision rate, upper and lower confidence interval utility score, variable inpatient rehabilitation rate for THA/TKA, and discounting at 5% were conducted to determine factors affecting the value of each type of surgery. At a median of 5 years (range 4-7 years), follow-up and revision surgery data were attained for 85% (FLSS), 80% (THA), and 75% (TKA) of the cohorts. The 5-year ICURs were $21,702/QALY for THA; $28,595/QALY for TKA; $12,271/QALY for spinal decompression; and $35,897/QALY for spinal decompression with fusion. The estimated lifetime ICURs using the median 5-year follow-up data were $5,682/QALY for THA; $6,489/QALY for TKA; $2,994/QALY for spinal decompression; and $10,806/QALY for spinal decompression with fusion. The overall spine (decompression alone and decompression and fusion) ICUR was $5,617/QALY. The estimated best- and worst-case lifetime ICURs varied from $1,126/QALY for the best case (spinal decompression) to $39,323/QALY for the worst case (spinal decompression with fusion). Surgical management of primary OA of the spine, hip, and knee results in durable cost-utility ratios that are well below accepted thresholds for cost-effectiveness. Despite a significantly higher revision rate, the overall surgical management of FLSS for those who have failed medical management results in similar median 5-year and lifetime cost-utility compared with those of THA and TKA for the treatment of OA from the limited perspective of a public health insurance system. Copyright © 2014 Elsevier Inc. All rights reserved.
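Schematically, the lifetime ICUR computed here has the form (our notation; discounting at 3% as the study states):

```latex
\mathrm{ICUR} = \frac{\Delta C}{\displaystyle\sum_{t=0}^{T} \frac{\Delta u_t}{(1.03)^{t}}},
```

where ΔC is the discounted incremental lifetime cost of surgery and Δu_t the annual SF-6D utility gain attributable to it.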
A Didactic Analysis of Functional Queues
ERIC Educational Resources Information Center
Rinderknecht, Christian
2011-01-01
When first introduced to the analysis of algorithms, students are taught how to assess the best and worst cases, whereas the mean and amortized costs are considered advanced topics, usually saved for graduates. When presenting the latter, aggregate analysis is explained first because it is the most intuitive kind of amortized analysis, often…
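For readers who want a concrete object to attach the aggregate analysis to, here is a minimal sketch of the classic two-list functional queue, whose occasional O(n) reversal is what the amortized argument averages out; this is a generic illustration, not code from the cited article.

```python
# Two-list queue: enqueue appends to a rear list; dequeue pops from a front
# list, reversing the rear list only when the front is empty. Aggregate
# analysis: each element is moved at most twice overall, so any sequence of
# n operations costs O(n), i.e., O(1) amortized per operation.

class FunctionalQueue:
    def __init__(self):
        self.front = []   # next item to dequeue is front[-1]
        self.rear = []    # most recently enqueued item is rear[-1]

    def enqueue(self, x):
        self.rear.append(x)        # O(1) worst case

    def dequeue(self):
        if not self.front:         # occasional O(n) reversal...
            self.front = self.rear[::-1]
            self.rear = []
        return self.front.pop()    # ...but O(1) amortized over any sequence

q = FunctionalQueue()
for i in range(5):
    q.enqueue(i)
assert [q.dequeue() for _ in range(5)] == [0, 1, 2, 3, 4]
```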
Minimum Requirements for Taxicab Security Cameras.
Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene
2014-07-01
The homicide rate of the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability.
``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
Increased care demand and medical costs after falls in nursing homes: A Delphi study.
Sterke, Carolyn Shanty; Panneman, Martien J; Erasmus, Vicki; Polinder, Suzanne; van Beeck, Ed F
2018-04-21
To estimate the increased care demand and medical costs caused by falls in nursing homes. There is compelling evidence that falls in nursing homes are preventable. However, proper implementation of evidence-based guidelines to prevent falls is often hindered by insufficient management support, staff time and funding. A three-round Delphi study. A panel of 41 experts, all working in nursing homes in the Netherlands, received three online questionnaires to estimate the extra hours of care needed during the first year after a fall. This was estimated for ten fall categories with different levels of injury severity, in three scenarios: a best-case, a typical-case and a worst-case scenario. We calculated the costs of falls by multiplying the mean number of extra hours that the participants spent on the care for a resident after a fall by their hourly wages. In the case of a noninjurious fall, the extra time spent on the faller is on average almost 5 hr; expressed in euros, this adds up to €193. The extra staff time and costs of falls increased with increasing severity of injury. In the case of a fracture of the lower limb, the extra staff time increased to 132 hr; expressed in euros, that is €4,604. In the worst-case scenario of a fracture of the lower limb, the extra staff time increased to 284 hr; expressed in euros, that is €10,170. Falls in nursing homes result in a great deal of extra staff time spent on care, with extra costs varying between €193 for a noninjurious fall and €10,170 for serious falls. This study could aid decision-making on investing in appropriate implementation of falls prevention interventions in nursing homes. © 2018 John Wiley & Sons Ltd.
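The cost arithmetic described is simply extra staff hours multiplied by hourly wages. The sketch below re-derives figures of the same order; the single wage constant is a back-calculated assumption (the study evidently used category-specific wages), so the outputs only approximate the reported euro amounts.

```python
# Illustrative re-computation of the cost-of-falls arithmetic: extra staff
# hours per fall times an hourly wage. The wage is an assumption (roughly
# €36/hr reproduces the reported order of magnitude), not a study value.

HOURLY_WAGE_EUR = 35.8  # assumed; the study's category-specific wages differ

scenarios = {
    "noninjurious fall (typical case)": 5.0,
    "lower-limb fracture (typical case)": 132.0,
    "lower-limb fracture (worst case)": 284.0,
}
for label, extra_hours in scenarios.items():
    cost = extra_hours * HOURLY_WAGE_EUR
    print(f"{label}: {extra_hours:5.0f} hr -> ~EUR {cost:,.0f}")
```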
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There exist two fundamental approaches to multicast routing, namely minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects the receivers and sources with a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees, minimum Steiner trees (MSTs) and minimum-number-of-transmissions trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio and average unicast packet delivery ratio. We also evaluated multicast performance in small and large wireless mesh networks. For small networks, we found that when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For large networks, the MSTs provide the minimum total edge cost and minimum number of transmissions. We also found one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
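To make the SPT notion concrete, the sketch below builds a shortest path tree with Dijkstra's algorithm on an invented topology and reports the source-to-receiver delays that SPTs minimize; it is a generic illustration, not the simulation setup of the paper.

```python
# Shortest path tree (SPT) construction via Dijkstra's algorithm. The graph,
# weights, and node names are made up for illustration.
import heapq

def shortest_path_tree(adj, src):
    """Returns SPT parent pointers and distances from src."""
    dist, parent = {src: 0}, {src: None}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return parent, dist

adj = {  # undirected weighted graph: node -> [(neighbor, weight), ...]
    "s": [("a", 1), ("b", 1)],
    "a": [("s", 1), ("r1", 1)],
    "b": [("s", 1), ("r1", 2), ("r2", 2)],
    "r1": [("a", 1), ("b", 2), ("r2", 1)],
    "r2": [("b", 2), ("r1", 1)],
}
parent, dist = shortest_path_tree(adj, "s")
print({r: dist[r] for r in ("r1", "r2")})  # the delays that SPTs minimize
```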
Time Safety Margin: Theory and Practice
2016-09-01
Basic dive recovery terminology. Time Safety Margin (TSM) is defined as the time, in seconds, to directly travel from the worst-case vector (i.e., the worst-case combination of parameters) to an ... As invoked by this AFI, base recovery planning and risk management upon the calculated TSM.
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
Child Labour Remains "Massive Problem."
ERIC Educational Resources Information Center
World of Work, 2002
2002-01-01
Despite significant progress in efforts to abolish child labor, an alarming number of children are engaged in its worst forms. Although 106 million are engaged in acceptable labor (light work for those above the minimum age for employment), 246 million are involved in child labor that should be abolished (under minimum age, hazardous work). (JOW)
Efficiency analysis of diffusion on T-fractals in the sense of random walks.
Peng, Junhao; Xu, Guoai
2014-04-07
Efficiently controlling the diffusion process is crucial in the study of diffusion problems in complex systems. In the sense of random walks with a single trap, the mean trapping time (MTT) and mean diffusing time (MDT) are good measures of trapping efficiency and diffusion efficiency, respectively. They both vary with the location of the node. In this paper, we analyze the effects of a node's location on the trapping efficiency and diffusion efficiency of T-fractals measured by the MTT and MDT. First, we provide methods to calculate the MTT for any target node and the MDT for any source node of T-fractals. The methods can also be used to calculate the mean first-passage time between any pair of nodes. Then, using the MTT and the MDT as the measures of trapping efficiency and diffusion efficiency, respectively, we compare the trapping efficiency and diffusion efficiency among all nodes of the T-fractal and find the best (or worst) trapping sites and the best (or worst) diffusing sites. Our results show that the hub node of the T-fractal is the best trapping site, but it is also the worst diffusing site; and that the three boundary nodes are the worst trapping sites, but they are also the best diffusing sites. Comparing the maximum of the MTT and MDT with their minimums, we find that the maximum of the MTT is almost 6 times its minimum, whereas the maximum of the MDT is almost equal to its minimum. Thus, the location of the target node has a large effect on the trapping efficiency, but the location of the source node has almost no effect on the diffusion efficiency. We also simulate random walks on T-fractals, whose results are consistent with the derived results.
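A minimal Monte Carlo sketch of the MTT comparison is given below; it uses a small star graph as a stand-in for a T-fractal (node labels and walk counts are arbitrary), but the hub-versus-boundary contrast it exhibits is the same phenomenon the paper derives analytically.

```python
# Estimate the mean trapping time (MTT) of an unbiased random walk toward a
# single trap, for a trap placed at the hub versus at a boundary node.
import random

def mean_trapping_time(adj, trap, n_walks=20000):
    nodes = [v for v in adj if v != trap]
    total = 0
    for _ in range(n_walks):
        u = random.choice(nodes)         # uniformly random starting node
        steps = 0
        while u != trap:
            u = random.choice(adj[u])    # unbiased nearest-neighbor step
            steps += 1
        total += steps
    return total / n_walks

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # node 0 is the hub
print("trap at hub     :", mean_trapping_time(star, trap=0))  # ~1.0
print("trap at boundary:", mean_trapping_time(star, trap=1))  # ~5.7
```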
Kameda, Tatsuya; Inukai, Keigo; Higuchi, Satomi; Ogawa, Akitoshi; Kim, Hackjin; Matsuda, Tetsuya; Sakagami, Masamichi
2016-01-01
Distributive justice concerns the moral principles by which we seek to allocate resources fairly among diverse members of a society. Although the concept of fair allocation is one of the fundamental building blocks for societies, there is no clear consensus on how to achieve “socially just” allocations. Here, we examine neurocognitive commonalities of distributive judgments and risky decisions. We explore the hypothesis that people’s allocation decisions for others are closely related to economic decisions for oneself at behavioral, cognitive, and neural levels, via a concern about the minimum, worst-off position. In a series of experiments using attention-monitoring and brain-imaging techniques, we investigated this “maximin” concern (maximizing the minimum possible payoff) via responses in two seemingly disparate tasks: third-party distribution of rewards for others, and choosing gambles for self. The experiments revealed three robust results: (i) participants’ distributive choices closely matched their risk preferences—“Rawlsians,” who maximized the worst-off position in distributions for others, avoided riskier gambles for themselves, whereas “utilitarians,” who favored the largest-total distributions, preferred riskier but more profitable gambles; (ii) across such individual choice preferences, however, participants generally showed the greatest spontaneous attention to information about the worst possible outcomes in both tasks; and (iii) this robust concern about the minimum outcomes was correlated with activation of the right temporoparietal junction (RTPJ), the region associated with perspective taking. The results provide convergent evidence that social distribution for others is psychologically linked to risky decision making for self, drawing on common cognitive–neural processes with spontaneous perspective taking of the worst-off position. PMID:27688764
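The maximin rule itself is easy to state computationally. The toy sketch below, with invented payoff vectors, contrasts a Rawlsian (maximin) choice with a utilitarian (maximum-total) one; it is illustrative only and unrelated to the experimental tasks.

```python
# Toy illustration of the "maximin" rule: among candidate allocations
# (payoff vectors across recipients), a Rawlsian chooser maximizes the
# worst-off position, while a utilitarian maximizes the total.
allocations = {
    "A": (5, 5, 5),   # equal split
    "B": (2, 7, 9),   # high total, low floor
    "C": (4, 6, 6),   # compromise
}
rawlsian = max(allocations, key=lambda k: min(allocations[k]))
utilitarian = max(allocations, key=lambda k: sum(allocations[k]))
print(rawlsian, utilitarian)   # -> A B
```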
Analysis of Separation Corridors for Visiting Vehicles from the International Space Station
NASA Technical Reports Server (NTRS)
Zaczek, Mariusz P.; Schrock, Rita R.; Schrock, Mark B.; Lowman, Bryan C.
2011-01-01
The International Space Station (ISS) is a very dynamic vehicle with many operational constraints that affect its performance, operations, and vehicle lifetime. Most constraints are designed to alleviate various safety concerns that are a result of dynamic activities between the ISS and various Visiting Vehicles (VVs). One such constraint that has been in place for Russian Vehicle (RV) operations is the limitation placed on Solar Array (SA) positioning in order to prevent collisions during separation and subsequent relative motion of VVs. An unintended consequence of the SA constraint has been the impacts to the operational flexibility of the ISS resulting from the reduced power generation capability as well as from a reduction in the operational lifetime of various SA components. The purpose of this paper is to discuss the technique and the analysis that were applied in order to relax the SA constraints for RV undockings, thereby improving both the ISS operational flexibility and extending its lifetime for many years to come. This analysis focused on the effects of the dynamic motion that occur both prior to and following RV separations. The analysis involved a parametric approach in the conservative application of various initial conditions and assumptions. These included the use of the worst case minimum and maximum vehicle configurations, worst case initial attitudes and attitude rates, and the worst case docking port separation dynamics. Separations were calculated for multiple ISS docking ports, at varied deviations from the nominal undocking attitudes and included the use of two separate attitude control schemes: continuous free-drift and a post separation attitude hold. The analysis required numerical propagation of both the separation motion and the vehicle attitudes using 3-degree-of-freedom (DOF) relative motion equations coupled with rigid body rotational dynamics to generate a large set of separation trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdren, J.P.
The need for fusion energy depends strongly on fusion's potential to achieve ambitious safety goals more completely or more economically than fission can. The history and present complexion of public opinion about environment and safety give little basis for expecting either that these concerns will prove to be a passing fad or that the public will make demands for zero risk that no energy source can meet. Hazard indices based on "worst case" accidents and exposures should be used as design tools to promote combinations of fusion-reactor materials and configurations that bring the worst cases down to levels small compared to the hazards people tolerate from electricity at the point of end use. It may well be possible, by building such safety into fusion from the ground up, to accomplish this goal at costs competitive with other inexhaustible electricity sources. Indeed, the still rising and ultimately indeterminate costs of meeting safety and environmental requirements in nonbreeder fission reactors and coal-burning power plants mean that fusion reactors meeting ambitious safety goals may be able to compete economically with these "interim" electricity sources as well.
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
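For orientation, a sketch of the OGM iteration as we understand it from the published update rules is given below, applied to a toy quadratic; the exact coefficients (in particular the modified final-step momentum factor) should be checked against the paper before reuse.

```python
# Hedged sketch of the optimized gradient method (OGM) on a smooth convex
# quadratic with known Lipschitz constant L. Verify the coefficients against
# Kim & Fessler before relying on them.
import numpy as np

def ogm(grad, L, x0, n_iter):
    x = y = np.asarray(x0, dtype=float)
    theta = 1.0
    for k in range(n_iter):
        y_next = x - grad(x) / L                      # gradient step
        if k < n_iter - 1:                            # standard momentum factor
            theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        else:                                         # larger final-step factor
            theta_next = (1 + np.sqrt(1 + 8 * theta**2)) / 2
        x = (y_next
             + ((theta - 1) / theta_next) * (y_next - y)
             + (theta / theta_next) * (y_next - x))
        y, theta = y_next, theta_next
    return x

A = np.diag([1.0, 10.0])                      # f(x) = 0.5 x^T A x, minimizer 0
grad = lambda x: A @ x
L = 10.0                                      # largest eigenvalue of A
print(ogm(grad, L, x0=[5.0, 5.0], n_iter=50))  # approaches the origin
```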
Fracture mechanics technology for optimum pressure vessel design.
NASA Technical Reports Server (NTRS)
Bjeletich, J. G.; Morton, T. M.
1973-01-01
A technique has been developed to design a maximum-efficiency reliable pressure vessel of given geometry and service life. The technique for ensuring the reliability of the minimum-weight vessel relies on the application of linear elastic fracture mechanics concepts. The resultant design accounts for potential fatigue and stress-corrosion crack extension, during service, of a worst-case initial flaw. The maximum stress for safe life is specified by the design technique, thereby minimizing weight. Ratios of pressure and toughness parameters are employed to avoid arbitrary specification of the design stress level, which would lead to a suboptimum design.
Specifying design conservatism: Worst case versus probabilistic analysis
NASA Technical Reports Server (NTRS)
Miles, Ralph F., Jr.
1993-01-01
Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
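The contrast between the two approaches can be made concrete with a toy stress-versus-strength margin, as in the sketch below; the normal distributions and 3-sigma stacking are illustrative assumptions, not taken from the paper's aerospace examples.

```python
# Worst-case analysis stacks extreme parameter values and reports a margin,
# saying nothing about probability; probabilistic analysis estimates the
# actual failure probability by Monte Carlo. Distributions are invented.
import random

N = 100_000
def stress():   return random.gauss(100.0, 10.0)   # applied load
def strength(): return random.gauss(150.0, 12.0)   # part capability

# Worst-case: 3-sigma worst load against 3-sigma weakest part.
worst_margin = (150.0 - 3 * 12.0) - (100.0 + 3 * 10.0)
print(f"worst-case margin: {worst_margin:+.1f} (negative, yet failure is rare)")

# Probabilistic: estimated probability of failure.
failures = sum(stress() > strength() for _ in range(N))
print(f"estimated P(failure) ~ {failures / N:.2e}")   # on the order of 1e-3
```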
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 2 2012-07-01 2012-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 253.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2010 CFR
2010-07-01
...: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000 bbls but not more than... must demonstrate OSFR in accordance with the following table: COF worst case oil-spill discharge volume... applicable table in paragraph (b)(1) or (b)(2) for a facility with a potential worst case oil-spill discharge...
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 2 2013-07-01 2013-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 2 2014-07-01 2014-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
Great Plains Project: at worst a $1.7 billion squeeze
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maize, K.
1983-04-11
On January 29, 1982, seeking a loan guarantee for its coal-to-gas synfuels project, Great Plains Gasification Associates told the Department of Energy that they expected to reap $1.2 billion in net income to the partnership during the first 10 years of the venture. On March 31, 1983, Great Plains treasurer Rodney Boulanger had a different projection: a horrific loss of $773 million in the first decade. The Great Plains project, with construction 50% complete, is being built near Beulah, ND. The project has a design capacity of 137.5 million cubic feet a day of SNG. Great Plains' analysis assumes that the plant will operate at 70% of design capacity in 1985, 77% in 1986, 84% in 1987 and 91% thereafter. The company projects the total project cost at $2.1 billion, consisting of plant costs of $1.9 billion and coal mine costs of $156 million. In originally projecting a cumulative net income of better than $1 billion, the partners anticipated running losses in only three of the first 10 years, and cash distributions from the project of $893 million during the first decade. Under the new projections, even in the best case, the first four years would show losses and there would be no distribution to the partners. In the worst case, the project would run in the red every year for the first 10 years.
The economic burden of meningitis to households in Kassena-Nankana district of Northern Ghana.
Akweongo, Patricia; Dalaba, Maxwell A; Hayden, Mary H; Awine, Timothy; Nyaaba, Gertrude N; Anaseba, Dominic; Hodgson, Abraham; Forgor, Abdulai A; Pandya, Rajul
2013-01-01
To estimate the direct and indirect costs of meningitis to households in the Kassena-Nankana District of Ghana. A cost-of-illness (COI) survey was conducted between 2010 and 2011. The COI was computed from a retrospective review of 80 meningitis cases' answers to questions about direct medical costs, direct non-medical costs incurred, and productivity losses due to a recent meningitis episode. The average direct and indirect costs of treating meningitis in the district were GH¢152.55 (US$101.7) per household. This is equivalent to about two months' minimum wage earned by Ghanaians in unskilled paid jobs in 2009. Households lost 29 days of work per meningitis case, and thus those in minimum-wage paid jobs lost a monthly minimum wage of GH¢76.85 (US$51.23) due to the illness. Patients who were insured spent an average of GH¢38.5 (US$25.67) in direct medical costs, while the uninsured patients spent as much as GH¢177.9 (US$118.6) per case. Patients with sequelae incurred additional costs of GH¢22.63 (US$15.08) per case. The least poor were more exposed to meningitis than the poorest. Meningitis is a debilitating but preventable disease that affects people living in the Sahel and in poorer conditions. The cost of meningitis treatment may further lead to impoverishment for these households. Widespread mass vaccination would save households an equivalent of GH¢175.18 (US$117) and prevent impairment due to meningitis.
Beyond Worst-Case Analysis in Privacy and Clustering: Exploiting Explicit and Implicit Assumptions
2013-08-01
Dwork et al. [63]. Given a query function f, the curator first estimates the global sensitivity of f, denoted GS(f) = max_{D,D'} |f(D) - f(D')|, then outputs f... Ostrovsky et al. [121] study instances in which the ratio between the cost of the optimal (k-1)-means solution and the cost of the ... k-median objective. We also build on the work of Balcan et al. [25], who investigate the connection between point-wise approximations of the target...
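As a concrete instance of the global-sensitivity mechanism referenced above, the following sketch adds Laplace noise scaled by GS(f)/epsilon to a counting query (for which GS = 1); the data and epsilon are invented, and the inverse-CDF sampler is one standard way to draw Laplace noise.

```python
# Laplace mechanism sketch: release f(D) + Lap(GS(f)/epsilon). Here f is a
# counting query, whose global sensitivity is 1. Illustrative values only.
import math
import random

def laplace_mechanism(true_answer, global_sensitivity, epsilon):
    scale = global_sensitivity / epsilon
    u = random.random() - 0.5                 # inverse-CDF Laplace sample
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_answer + noise

data = [23, 45, 31, 18, 52]
count_over_30 = sum(x > 30 for x in data)     # GS = 1 for a counting query
print(laplace_mechanism(count_over_30, global_sensitivity=1.0, epsilon=0.5))
```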
30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 2 2011-07-01 2011-07-01 false How do I determine the worst case oil-spill... ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To...
30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How do I determine the worst case oil-spill... INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate the amount...
Lower bound for LCD image quality
NASA Astrophysics Data System (ADS)
Olson, William P.; Balram, Nikhil
1996-03-01
The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics the information of interest is often at the highest frequencies in the image. Since LCDs are sampled data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above it is essential to have a lower (worst case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst case modulation transfer function (MTF). The intersection of the worst case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst case limiting resolution, this MTF is combined with the CTF to produce objective worst case image quality values using the modulation transfer function area (MTFA) metric.
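The phase dependence can be reproduced numerically: the sketch below samples a sine-wave pattern at pixel positions, sweeps the input phase, and takes the minimum modulation over phase as a worst-case MTF point; the pattern length and phase count are arbitrary choices, and this simplified model ignores the pixel aperture and display nonlinearity treated in the paper.

```python
# Worst-case (over phase) output modulation of a sampled sine-wave pattern.
# At a quarter of the sampling rate the worst phase costs ~30% of modulation,
# and exactly at Nyquist the modulation can vanish entirely.
import numpy as np

def worst_case_modulation(freq_cycles_per_pixel, n_pixels=256, n_phases=64):
    x = np.arange(n_pixels)                      # pixel sampling points
    worst = 1.0
    for phase in np.linspace(0, 2 * np.pi, n_phases, endpoint=False):
        s = np.sin(2 * np.pi * freq_cycles_per_pixel * x + phase)
        modulation = (s.max() - s.min()) / 2.0   # peak amplitude after sampling
        worst = min(worst, modulation)
    return worst

for f in (0.1, 0.25, 0.4, 0.5):                  # 0.5 cyc/px = Nyquist
    print(f"{f:.2f} cyc/px -> worst-case modulation {worst_case_modulation(f):.3f}")
```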
Probabilistic Solar Energetic Particle Models
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.
2011-01-01
To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceeds, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provide the worst-case differential proton spectrum. This model is based on data from IMP-8 and GOES spacecraft that provide a data base extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
Pearle, Andrew D; van der List, Jelle P; Lee, Lily; Coon, Thomas M; Borus, Todd A; Roche, Martin W
2017-03-01
Successful clinical outcomes following unicompartmental knee arthroplasty (UKA) depend on lower limb alignment, soft tissue balance and component positioning, which can be difficult to control using manual instrumentation. Although robotic-assisted surgery more reliably controls these surgical factors, studies assessing outcomes of robotic-assisted UKA are lacking. Therefore, a prospective multicenter study was performed to assess outcomes of robotic-assisted UKA. A total of 1007 consecutive patients (1135 knees) underwent robotic-assisted medial UKA surgery from six surgeons at separate institutions between March 2009 and December 2011. All patients received a fixed-bearing metal-backed onlay implant as the tibial component. Each patient was contacted at minimum two-year follow-up and asked a series of five questions to determine survivorship and patient satisfaction. Worst-case scenario analysis was performed whereby all patients who declined participation in the study were considered as revisions. Data were collected for 797 patients (909 knees) with an average follow-up of 29.6 months (range: 22-52 months). At 2.5 years of follow-up, 11 knees were reported as revised, which resulted in a survivorship of 98.8%. Thirty-five patients declined participation in the study, yielding a worst-case survivorship of 96.0%. Of all patients without revision, 92% were either very satisfied or satisfied with their knee function. In this multicenter study, robotic-assisted UKA was found to have a high survivorship and satisfaction rate at short-term follow-up. Prospective comparison studies with longer follow-up are necessary in order to compare survivorship and satisfaction rates of robotic-assisted UKA to conventional UKA and total knee arthroplasty. Copyright © 2016 Elsevier B.V. All rights reserved.
Zhu, Zhengfei; Liu, Wei; Gillin, Michael; Gomez, Daniel R; Komaki, Ritsuko; Cox, James D; Mohan, Radhe; Chang, Joe Y
2014-05-06
We assessed the robustness of passive scattering proton therapy (PSPT) plans for patients in a phase II trial of PSPT for stage III non-small cell lung cancer (NSCLC) by using the worst-case scenario method, and compared the worst-case dose distributions with the appearance of locally recurrent lesions. Worst-case dose distributions were generated for each of 9 patients who experienced recurrence after concurrent chemotherapy and PSPT to 74 Gy(RBE) for stage III NSCLC by simulating and incorporating uncertainties associated with set-up, respiration-induced organ motion, and proton range in the planning process. The worst-case CT scans were then fused with the positron emission tomography (PET) scans to locate the recurrence. Although the volumes enclosed by the prescription isodose lines in the worst-case dose distributions were consistently smaller than enclosed volumes in the nominal plans, the target dose coverage was not significantly affected: only one patient had a recurrence outside the prescription isodose lines in the worst-case plan. PSPT is a relatively robust technique. Local recurrence was not associated with target underdosage resulting from estimated uncertainties in 8 of 9 cases.
Occulting Light Concentrators in Liquid Scintillator Neutrino Detectors
NASA Astrophysics Data System (ADS)
Buizza Avanzini, Margherita; Cabrera, Anatael; Dusini, Stefano; Grassi, Marco; He, Miao; Wu, Wenjie
2017-09-01
The experimental efforts characterizing the era of precision neutrino physics revolve around collecting high-statistics neutrino samples and attaining an excellent energy and position resolution. Next-generation liquid-based neutrino detectors, such as JUNO, HyperKamiokande, etc., share the use of a large target mass and the need to push light collection to the edge for maximal calorimetric information. Achieving high light collection implies considerable costs, especially when considering detector masses of several kt. A traditional strategy to maximize the effective photo-coverage with the minimum number of PMTs relies on Light Concentrators (LC), such as Winston Cones. In this paper, the authors introduce a novel concept called Occulting Light Concentrators (OLC), whereby a traditional LC is tailored to a conventional PMT by taking into account its single-photoelectron collection efficiency profile and thus occulting the worst-performing portion of the photocathode. The OLC shape optimization thus takes into account not only the optical interface of the PMT but also the maximization of the PMT detection performance. The light collection uniformity across the detector is another advantage of the OLC system. Considering the case of JUNO, we show the OLC capabilities in terms of light collection and energy resolution.
An Air Revitalization Model (ARM) for Regenerative Life Support Systems (RLSS)
NASA Technical Reports Server (NTRS)
Hart, Maxwell M.
1990-01-01
The primary objective of the air revitalization model (ARM) is to determine the minimum buffer capacities that would be necessary for long-duration space missions. Several observations are supported by the current configuration sizes: the baseline values for each gas and the day-to-day or month-to-month fluctuations that are allowed. The baseline values depend on the minimum safety tolerances and the quantities of life support consumables necessary to survive the worst-case scenarios within those tolerances. Most, if not all, of these quantities can easily be determined by ARM once these tolerances are set. The day-to-day fluctuations also require a command decision. It is already apparent from the current configuration of ARM that the tighter these fluctuations are controlled, the more energy is used, the more nonregenerable hydrazine is consumed, and the larger the required capacities for the various gas generators. All of these relationships could clearly be quantified by one operational ARM.
Fine-Scale Structure Design for 3D Printing
NASA Astrophysics Data System (ADS)
Panetta, Francis Julian
Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
R&M (Reliability and Maintainability) Program Cost Drivers.
1987-05-01
Specific data points used to develop the models (i.e., labor hours and associated systems and task application characteristics) were obtained from three... study data base used to generate the CERs can be expanded by adding project data points to the input data given in Appendix 13, adjusting the CER... FRACAS, worst-case/thermal analyses, stress screening and R-growth. However, the studies did not assign benefits to specific task areas. c. Task
Flying After Conducting an Aircraft Excessive Cabin Leakage Test.
Houston, Stephen; Wilkinson, Elizabeth
2016-09-01
Aviation medical specialists should be aware that commercial airline aircraft engineers may undertake a 'dive equivalent' operation while conducting maintenance activities on the ground. We present a worked example of an occupational risk assessment to determine a minimum safe preflight surface interval (PFSI) for an engineer before flying home to base after conducting an Excessive Cabin Leakage Test (ECLT) on an unserviceable aircraft overseas. We use published dive tables to determine the minimum safe PFSI. The estimated maximum depth acquired during the procedure varies between 10 and 20 fsw and the typical estimated bottom time varies between 26 and 53 min for the aircraft types operated by the airline. Published dive tables suggest that no minimum PFSI is required for such a dive profile. Diving tables suggest that no minimum PFSI is required for the typical ECLT dive profile within the airline; however, having conducted a risk assessment, which considered peak altitude exposure during commercial flight, the worst-case scenario test dive profile, the variability of interindividual inert gas retention, and our existing policy among other occupational groups within the airline, we advised that, in the absence of a bespoke assessment of the particular circumstances on the day, the minimum PFSI after conducting ECLT should be 24 h. Houston S, Wilkinson E. Flying after conducting an aircraft excessive cabin leakage test. Aerosp Med Hum Perform. 2016; 87(9):816-820.
On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2017-06-21
The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated with inclusions of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimate the worst-case RF-induced heating in multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and there exist multiple SAR or power constraints to be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of this original non-convex problem, whose solution closely approximates the true worst-case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
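A minimal sketch of the reviewed "common approach" is given below: with one Hermitian quadratic (local SAR) form in the numerator and one constraint form in the denominator, the worst case is the largest generalized eigenvalue, and its eigenvector is the worst-case excitation. The matrices here are random Hermitian stand-ins, not field-derived SAR matrices.

```python
# Maximize the ratio of two Hermitian forms, max_x (x^H A x)/(x^H B x),
# via the generalized eigenvalue problem A x = lambda B x.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8                                   # number of RF transmit channels
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T                      # Hermitian PSD: local-SAR form
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = P @ P.conj().T + n * np.eye(n)      # Hermitian PD: global-constraint form

vals, vecs = eigh(A, B)                 # generalized eigendecomposition
worst_ratio, worst_excitation = vals[-1], vecs[:, -1]
print("worst-case local/global SAR ratio:", worst_ratio)
```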
Hamers, Françoise F; Rumeau-Pichon, Catherine
2012-06-08
Five diseases are currently screened on dried blood spots in France through the national newborn screening programme. Tandem mass spectrometry (MS/MS) is a technology that is increasingly used to screen newborns for an increasing number of hereditary metabolic diseases. Medium chain acyl-CoA dehydrogenase deficiency (MCADD) is among these diseases. We sought to evaluate the cost-effectiveness of introducing MCADD screening in France. We developed a decision model to evaluate, from a societal perspective and a lifetime horizon, the cost-effectiveness of expanding the French newborn screening programme to include MCADD. Published and, where available, routine data sources were used. Both costs and health consequences were discounted at an annual rate of 4%. The model was applied to a French birth cohort. One-way sensitivity analyses and worst-case scenario simulation were performed. We estimate that MCADD newborn screening in France would prevent each year five deaths and the occurrence of neurological sequelae in two children under 5 years, resulting in a gain of 128 life years or 138 quality-adjusted life years (QALY). The incremental cost per year is estimated at €2.5 million, down to €1 million if this expansion is combined with a replacement of the technology currently used for phenylketonuria screening by MS/MS. The resulting incremental cost-effectiveness ratio (ICER) is estimated at €7 580/QALY. Sensitivity analyses indicate that while the results are robust to variations in the parameters, the model is most sensitive to the cost of neurological sequelae, MCADD prevalence, screening effectiveness and screening test cost. The worst-case scenario suggests an ICER of €72 000/QALY gained. Although France has not defined any threshold for judging whether the implementation of a health intervention is an efficient allocation of public resources, we conclude that the expansion of the French newborn screening programme to MCADD would appear to be cost-effective. The results of this analysis have been used to produce recommendations for the introduction of universal newborn screening for MCADD in France.
Code of Federal Regulations, 2012 CFR
2012-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Code of Federal Regulations, 2014 CFR
2014-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Code of Federal Regulations, 2013 CFR
2013-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
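For reference, the Expected Improvement acquisition at the core of the EGO loop can be written in a few lines, as sketched below for maximization; the surrogate mean/standard-deviation arrays are invented placeholders standing in for kriging predictions at candidate points.

```python
# Expected Improvement (EI) for maximization of a scalar response, e.g. a
# hot-case temperature. mu/sigma would come from a surrogate model fitted to
# the initial DoE; here they are illustrative arrays.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.0):
    """EI for maximization; zero wherever the surrogate is certain."""
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    improve = mu - best_so_far - xi
    z = np.divide(improve, sigma, out=np.zeros_like(sigma), where=sigma > 0)
    ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)

mu = np.array([55.0, 61.0, 58.0])       # surrogate predictions, deg C
sigma = np.array([1.0, 4.0, 0.0])       # surrogate uncertainty
print(expected_improvement(mu, sigma, best_so_far=60.0))
```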
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently for real-world applications.
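When the weight set is the full probability simplex, the inner maximization of the weighted sum reduces to the worst single objective, so a robust choice over a finite strategy set is a plain minimax, as in the toy sketch below (payoffs invented); restricted weight polytopes turn the inner step into a small linear program, and the equilibrium computation into the MPEC described above.

```python
# Worst-case weighted criterion, toy version: each strategy is scored by its
# maximum weighted loss over the full weight simplex, which equals its worst
# single objective; the robust strategy minimizes that worst case.
import numpy as np

losses = np.array([      # rows: strategies, cols: objective values (losses)
    [3.0, 6.0, 2.0],
    [4.0, 4.0, 4.0],
    [1.0, 7.0, 3.0],
])
# max over the simplex of w . losses[i] is simply losses[i].max()
worst_case = losses.max(axis=1)
best = int(worst_case.argmin())
print(f"robust strategy: {best} (worst-case loss {worst_case[best]})")
```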
Johnson, Miriam J; Kanaan, Mona; Richardson, Gerry; Nabb, Samantha; Torgerson, David; English, Anne; Barton, Rachael; Booth, Sara
2015-09-07
About 90% of patients with intra-thoracic malignancy experience breathlessness. Breathing training is helpful, but it is unknown whether repeated sessions are needed. The present study aims to test whether three sessions are better than one for breathlessness in this population. This is a multi-centre randomised controlled non-blinded parallel arm trial. Participants were allocated to three sessions or a single session (1:2 ratio) using central computer-generated block randomisation by an independent Trials Unit and stratified for centre. The setting was respiratory, oncology or palliative care clinics at eight UK centres. Inclusion criteria were people with intrathoracic cancer and refractory breathlessness, expected prognosis ≥3 months, and no prior experience of breathing training. The trial intervention was a complex breathlessness intervention (breathing training, anxiety management, relaxation, pacing, and prioritisation) delivered over three hour-long sessions at weekly intervals, or during a single hour-long session. The main primary outcome was worst breathlessness over the previous 24 hours ('worst'), by numerical rating scale (0 = none; 10 = worst imaginable). Our primary analysis was area under the curve (AUC) 'worst' from baseline to 4 weeks. All analyses were by intention to treat. Between April 2011 and October 2013, 156 consenting participants were randomised (52 to three sessions; 104 to a single session). Overall, the 'worst' score reduced from 6.81 (SD, 1.89) to 5.84 (2.39). Primary analysis [n = 124 (79%)] showed no between-arm difference in the AUC (three sessions 22.86 (7.12) vs single session 22.58 (7.10); P = 0.83); mean difference 0.2, 95% CI -2.31 to 2.97. Complete case analysis showed a non-significant reduction in QALYs with three sessions (mean difference -0.006, 95% CI -0.018 to 0.006). Sensitivity analyses found similar results. The probability of the single session being cost-effective (threshold value of £20,000 per QALY) was over 80%. There was no evidence that three sessions conferred additional benefits, including cost-effectiveness, over one. A single session of breathing training seems appropriate and minimises patient burden. Registry: ISRCTN; ISRCTN49387307; http://www.isrctn.com/ISRCTN49387307; registration date: 25/01/2011.
Existential Risk and Cost-Effective Biosecurity
Snyder-Beattie, Andrew
2017-01-01
In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios. PMID:28806130
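The expected-value argument sketched in this abstract reduces to simple arithmetic. All of the numbers below are placeholders chosen only to show the shape of the calculation, not estimates from the paper.

baseline_risk = 1e-5     # assumed probability of an extinction-level event per century
risk_reduction = 0.01    # assumed fractional risk reduction bought by an intervention
population = 8e9         # current lives at stake, ignoring future generations
cost = 250e6             # assumed cost of the intervention in dollars

expected_lives_saved = baseline_risk * risk_reduction * population   # 800 here
print(f"cost per expected life saved: ${cost / expected_lives_saved:,.0f}")  # $312,500

# Counting future generations multiplies expected_lives_saved by orders of
# magnitude, which is the paper's core reason why even tiny absolute risk
# reductions can compare favourably with conventional biosecurity spending.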
The cost effectiveness of intracytoplasmic sperm injection (ICSI).
Hollingsworth, Bruce; Harris, Anthony; Mortimer, Duncan
2007-12-01
To estimate the incremental cost effectiveness of ICSI, and total costs for the population of Australia. Treatment effects for three patient groups were drawn from a published systematic review and meta-analysis of trials comparing fertilisation outcomes for ICSI. Incremental costs were derived from resource-based costing of ICSI and existing practice comparators for each patient group. Incremental cost per live birth for patients unsuited to IVF is estimated at between A$8,500 and A$13,400. For the subnormal semen indication, cost per live birth could be as low as A$3,600, but in the worst-case scenario there would simply be an additional incremental cost of A$600 per procedure. Multiplying out the additional costs of ICSI over the relevant target populations in Australia gives potential total financial implications of over A$31 million per annum. While there are additional benefits from the ICSI procedure, particularly for those with subnormal sperm, the additional cost for the health care system is substantial.
Incidence of whooping cough in Spain (1997-2010): an underreported disease.
Fernández-Cano, María Isabel; Armadans Gil, Lluís; Martínez Gómez, Xavi; Campins Martí, Magda
2014-06-01
Whooping cough is currently the worst controlled vaccine-preventable disease in the majority of countries. In order to reduce its morbidity and mortality, it is essential to adapt vaccination programmes to data provided by epidemiological surveillance. A population-based retrospective epidemiological study to estimate the minimum annual undernotification rate of pertussis in Spain from 1997 to 2010 was performed. The incidence of pertussis cases reported to the National Notifiable Disease Surveillance System was compared with the incidence of hospital discharges for pertussis from the National Surveillance System for hospital data, Conjunto Mínimo Básico de Datos. The overall reported incidence and that of hospitalisation for whooping cough were both 1.3 cases per 100,000 inhabitants. Minimum underreporting ranged between 3.8% and 22.8%, depending on the year of the study. The greatest underreporting (50%) was observed in children under the age of 1 year. The Spanish epidemiological surveillance system for pertussis should be improved with complementary active systems to ascertain the real incidence. Paediatricians and general practitioners should be made aware of the importance of notification, which is essential for adapting the prevention and control measures for this disease.
30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.
Code of Federal Regulations, 2011 CFR
2011-07-01
... associated with the facility. In determining the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your...) For exploratory or development drilling operations, the size of your worst case discharge scenario is...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Wang, X; Li, H
Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because the protons' finite range depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted at our institution for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and range uncertainty, were explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. One thousand randomly perturbed cases in proton range and in the x, y, and z directions were created, and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario result. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of the perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive-sampling approach using the fast approximate method could be used in the future to evaluate the effects of different factors on the robustness of IMPT plans.
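The extensive-sampling check described above can be sketched as a small Monte Carlo experiment. The dose model below is a stand-in (a smooth function of the perturbation magnitude), not a proton dose engine; only the ±3 mm and ±3.5% bounds follow the abstract.

import numpy as np

rng = np.random.default_rng(0)
N = 1000
shifts = rng.uniform(-3.0, 3.0, size=(N, 3))        # isocenter shifts (mm)
range_err = rng.uniform(-0.035, 0.035, size=N)      # stopping-power error (fraction)

def d95_metric(shift, r_err):
    # Placeholder dose response: nominal D95 = 100%, degraded with perturbation size
    return 100.0 - 0.4 * np.linalg.norm(shift) - 60.0 * abs(r_err)

d95 = np.array([d95_metric(s, r) for s, r in zip(shifts, range_err)])

# Worst-case scenario: maximal shift along every axis plus maximal range error
d95_wcs = d95_metric(np.array([3.0, 3.0, 3.0]), 0.035)
print(f"cases with D95 above the worst case: {np.mean(d95 >= d95_wcs):.1%}")
# With this toy model every sampled case clears the worst case, mirroring the
# abstract's finding that at least 97% of perturbed cases do.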
Evaluation of Separation Mechanism Design for the Orion/Ares Launch Vehicle
NASA Technical Reports Server (NTRS)
Konno, Kevin E.; Catalano, Daniel A.; Krivanek, Thomas M.
2008-01-01
As a part of the preliminary design work being performed for the Orion vehicle, the Orion to Spacecraft Adaptor (SA) separation mechanism was analyzed and sized, with findings presented here. Sizing is based on a worst-case abort condition resulting from an anomaly driving the launch vehicle engine thrust vector control hard-over, causing a severe vehicle pitch-over. This worst-case scenario occurs just before Upper Stage Main Engine Cut-Off (MECO), when the vehicle is lightest and the damping effect due to propellant slosh has been reduced to a minimum. To address this scenario and others, two modeling approaches were invoked. The first approach was a detailed 2-D (Simulink) model to quickly assess the Service Module Engine nozzle to SA clearance for a given separation mechanism. The second approach involved the generation of an Automatic Dynamic Analysis of Mechanical Systems (ADAMS) model to assess secondary effects due to mass centers of gravity that were slightly off the vehicle centerline. It also captured any interference between the Solar Arrays and the Spacecraft Adapter. A comparison of modeling results and accuracy is discussed. Most notably, incorporating a larger SA flange diameter allowed for a natural separation of the Orion and its engine nozzle even at relatively large pitch rates, minimizing the kickoff force. Advantages and disadvantages of the 2-D model vs. a full 3-D (ADAMS) model are discussed as well.
Vonkeman, Harald E; Braakman-Jansen, Louise M A; Klok, Rogier M; Postma, Maarten J; Brouwers, Jacobus R B J; van de Laar, Mart A F J
2008-01-01
We estimated the cost effectiveness of concomitant proton pump inhibitors (PPIs) in relation to the occurrence of non-steroidal anti-inflammatory drug (NSAID) ulcer complications. This study was linked to a nested case-control study. Patients with NSAID ulcer complications were compared with matched controls. Only direct medical costs were reported. For the calculation of the incremental cost effectiveness ratio we extrapolated the data to 1,000 patients using concomitant PPIs and 1,000 patients not using PPIs for 1 year. Sensitivity analysis was performed by 'worst case' and 'best case' scenarios in which the 95% confidence interval (CI) of the odds ratio (OR) and the 95% CI of the cost estimate of an NSAID ulcer complication were varied. The cost of PPIs was varied separately. In all, 104 incident cases and 284 matched controls were identified from a cohort of 51,903 NSAID users with 10,402 NSAID exposure years. Use of PPIs was associated with an adjusted OR of 0.33 (95% CI 0.17 to 0.67; p = 0.002) for NSAID ulcer complications. In the extrapolation the estimated number of NSAID ulcer complications was 13.8 for non-PPI users and 3.6 for PPI users. The incremental total costs were €50,094 higher for concomitant PPI use. The incremental cost effectiveness ratio was €4,907 per NSAID ulcer complication prevented when using the least costly PPIs. Concomitant use of PPIs for the prevention of NSAID ulcer complications costs €4,907 per NSAID ulcer complication prevented when using the least costly PPIs. The price of PPIs highly influenced the robustness of the results.
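The reported ICER can be reproduced from the abstract's own figures: the incremental cost divided by the number of complications prevented in the 1,000-patient extrapolation.

incremental_cost = 50_094       # euros, concomitant PPI vs no PPI, per 1,000 patients
complications_no_ppi = 13.8     # expected NSAID ulcer complications without PPIs
complications_ppi = 3.6         # expected complications with PPIs

prevented = complications_no_ppi - complications_ppi          # 10.2
print(f"ICER: {incremental_cost / prevented:,.0f} euros per complication prevented")
# ~4,911, matching the reported 4,907 euros up to rounding of the inputs.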
Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry
2011-03-01
For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your scenario must discuss how to respond to... drilling operations, the size of your worst case discharge scenario is the daily volume possible from an...
Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe
2015-01-01
We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%-D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
2011-12-16
[Fragmentary OCR text; recoverable content only: gain over direct path; worst-case loss = 6 dB for this h = 1 m target; resolved pulse width at -10 dB; good fundamental rejection (i.e., good balance) is needed in the multiplier stage; the last three approaches involve balanced mixers, SiGe baseband amplifiers, and 16-bit ADCs; very high-resolution (dynamic range), high-speed ADCs are available at low cost.]
Encapsulation materials research
NASA Technical Reports Server (NTRS)
Willis, P. B.
1984-01-01
Encapsulation materials for solar cells were investigated. The different phases consisted of: (1) identification and development of low cost module encapsulation materials; (2) materials reliability examination; and (3) process sensitivity and process development. Outdoor photothermal aging devices (OPT) were found to be the best accelerated aging method: they simulate worst-case field conditions, evaluate formulation and module performance, and offer a possibility for life assessment. Exposure to outdoor metallic copper should be avoided; self-priming formulations have good storage stability; stabilizers enhance performance; and soil-resistance treatment remains effective.
Fault-tolerant clock synchronization in distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.
1990-01-01
Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
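As a generic illustration of the convergence-averaging style of algorithm surveyed here (not a specific algorithm from the paper), each node can discard the f largest and f smallest of the clock readings it collects and set its clock to the mean of the remainder, which bounds the influence of up to f faulty clocks.

def resync(readings, f):
    # readings: clock values collected from the other nodes; f: max faulty nodes.
    # Requires more than 2f readings so that at least one value survives trimming.
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2f readings to tolerate f faults")
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

# One Byzantine peer (f = 1) reporting a wild value barely moves the average:
print(resync([100.2, 99.8, 100.1, 250.0, 99.9], f=1))  # ~100.07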
The impact of changing dental needs on cost savings from fluoridation.
Campain, A C; Mariño, R J; Wright, F A C; Harrison, D; Bailey, D L; Morgan, M V
2010-03-01
Although community water fluoridation has been one of the cornerstone strategies for the prevention and control of dental caries, questions are still raised regarding its cost-effectiveness. This study assessed the impact of changing dental needs on the cost savings from community water fluoridation in Australia. Net costs were estimated as Costs(programme) minus Costs(averted caries). Averted costs were estimated as the product of the caries increment in a non-fluoridated community, the effectiveness of fluoridation, and the cost of a carious surface. Modelling considered four age cohorts (6-20, 21-45, 46-65 and 66+ years) and three time points (1970s, 1980s, and 1990s). The cost of a carious surface was estimated by conventional and complex methods. Real discount rates (4%, 7% (base) and 10%) were utilized. With base-case assumptions, the average annual cost savings/person, using Australian dollars at the 2005 level, ranged from $56.41 (1970s) to $17.75 (1990s) (conventional method) and from $249.45 (1970s) to $69.86 (1990s) (complex method). Under worst-case assumptions fluoridation remained cost-effective, with cost savings ranging from $24.15 (1970s) to $3.87 (1990s) (conventional method) and from $107.85 (1970s) to $24.53 (1990s) (complex method). For the 66+ years cohort (1990s) fluoridation did not show a cost saving, but costs/person were marginal. Community water fluoridation remains a cost-effective preventive measure in Australia.
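The averted-cost model stated in this abstract multiplies three quantities. The inputs below are invented for illustration; the paper's actual increments and unit costs are not reproduced here.

caries_increment = 0.8     # carious surfaces per person-year, non-fluoridated (assumed)
effectiveness = 0.30       # assumed fractional caries reduction from fluoridation
cost_per_surface = 120.0   # assumed cost of restoring one carious surface (AUD)
programme_cost = 5.0       # assumed fluoridation cost per person-year (AUD)

averted = caries_increment * effectiveness * cost_per_surface   # 28.80 per person-year
net_cost = programme_cost - averted                             # negative = net saving
print(f"net cost per person-year: ${net_cost:.2f}")             # -$23.80 here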
Rail vs truck transport of biomass.
Mahmudi, Hamed; Flynn, Peter C
2006-01-01
This study analyzes the economics of transshipping biomass from truck to train in a North American setting. Transshipment will only be economic when the cost per unit distance of a second transportation mode is less than that of the original mode. There is an optimum number of transshipment terminals which is related to biomass yield. Transshipment incurs incremental fixed costs, and hence there is a minimum shipping distance for rail transport above which lower costs/km offset the incremental fixed costs. For transport by dedicated unit train with an optimum number of terminals, the minimum economic rail shipping distance for straw is 170 km, and for boreal forest harvest residue wood chips is 145 km. The minimum economic shipping distance for straw exceeds the biomass draw distance for economically sized centrally located power plants, and hence the prospects for rail transport are limited to cases in which traffic congestion from truck transport would otherwise preclude project development. Ideally, wood chip transport costs would be lowered by rail transshipment for an economically sized centrally located power plant, but in a specific case in Alberta, Canada, the layout of existing rail lines precludes a centrally located plant supplied by rail, whereas a more versatile road system enables it by truck. Hence for wood chips as well as straw the economic incentive for rail transport to centrally located processing plants is limited. Rail transshipment may still be preferred in cases in which road congestion precludes truck delivery, for example as a result of community objections.
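The break-even logic in this abstract reduces to a simple formula: rail becomes cheaper beyond D_min = F / (c_truck - c_rail), where F is the incremental fixed transshipment cost per tonne and the c values are per-tonne-km transport costs. The inputs below are illustrative assumptions, chosen so the result reproduces the paper's 170 km figure for straw.

fixed_transshipment = 5.10   # $/tonne, incremental fixed cost of truck-to-rail transfer (assumed)
truck_cost_per_km = 0.110    # $/tonne-km by truck (assumed)
rail_cost_per_km = 0.080     # $/tonne-km by dedicated unit train (assumed)

d_min = fixed_transshipment / (truck_cost_per_km - rail_cost_per_km)
print(f"rail is cheaper beyond {d_min:.0f} km")   # 170 km with these inputs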
Bifacial PV cell with reflector for stand-alone mast for sensor powering purposes
NASA Astrophysics Data System (ADS)
Jakobsen, Michael L.; Thorsteinsson, Sune; Poulsen, Peter B.; Riedel, N.; Rødder, Peter M.; Rødder, Kristin
2017-09-01
Reflectors for bifacial PV cells are simulated and prototyped in this work. The aim is to optimize the reflector for specific latitudes, particularly northern latitudes. Specifically, using a minimum of semiconductor area, the reflector must be able to deliver the required electrical power under the least favourable conditions: minimum solar travel above the horizon, worst-case weather conditions, etc. We test a bifacial PV module with a retroreflector and compare the output with simulations combined with local solar data.
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface-applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model that requires less detailed information about the soil and weather conditions but still represents the effect of soil and climate on pesticide leaching using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision that the detailed model provides for the prediction of pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but has complete coverage of the area, so it selects a worst-case condition more accurately.
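The percentile-based selection the authors describe can be sketched directly: simulate leaching for every soil-climate combination with the simpler model, then take the combination sitting at the chosen spatial percentile. The 90th percentile and the lognormal toy data below are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
conc = rng.lognormal(mean=-2.0, sigma=1.5, size=5000)   # simulated ug/L per location

target = np.percentile(conc, 90)                 # regulatory percentile of choice
scenario = int(np.argmin(np.abs(conc - target)))  # location closest to the target
print(f"90th-percentile concentration {target:.3f} ug/L at location #{scenario}")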
Taylor, Lauren J; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Brasel, Karen J; Johnson, Sara K; Zelenski, Amy; Rathouz, Paul J; Zhao, Qianqian; Kwekkeboom, Kristine L; Campbell, Toby C; Schwarze, Margaret L
2017-06-01
Although many older adults prefer to avoid burdensome interventions with limited ability to preserve their functional status, aggressive treatments, including surgery, are common near the end of life. Shared decision making is critical to achieve value-concordant treatment decisions and minimize unwanted care. However, communication in the acute inpatient setting is challenging. To evaluate the proof of concept of an intervention to teach surgeons to use the Best Case/Worst Case framework as a strategy to change surgeon communication and promote shared decision making during high-stakes surgical decisions. Our prospective pre-post study was conducted from June 2014 to August 2015, and data were analyzed using a mixed methods approach. The data were drawn from decision-making conversations between 32 older inpatients with an acute nonemergent surgical problem, 30 family members, and 25 surgeons at 1 tertiary care hospital in Madison, Wisconsin. A 2-hour training session to teach each study-enrolled surgeon to use the Best Case/Worst Case communication framework. We scored conversation transcripts using OPTION 5, an observer measure of shared decision making, and used qualitative content analysis to characterize patterns in conversation structure, description of outcomes, and deliberation over treatment alternatives. The study participants were patients aged 68 to 95 years (n = 32), 44% of whom had 5 or more comorbid conditions; family members of patients (n = 30); and surgeons (n = 17). The median OPTION 5 score improved from 41 preintervention (interquartile range, 26-66) to 74 after Best Case/Worst Case training (interquartile range, 60-81). Before training, surgeons described the patient's problem in conjunction with an operative solution, directed deliberation over options, listed discrete procedural risks, and did not integrate preferences into a treatment recommendation. After training, surgeons using Best Case/Worst Case clearly presented a choice between treatments, described a range of postoperative trajectories including functional decline, and involved patients and families in deliberation. Using the Best Case/Worst Case framework changed surgeon communication by shifting the focus of decision-making conversations from an isolated surgical problem to a discussion about treatment alternatives and outcomes. This intervention can help surgeons structure challenging conversations to promote shared decision making in the acute setting.
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 28 2011-07-01 2011-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 29 2012-07-01 2012-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 29 2013-07-01 2013-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 28 2014-07-01 2014-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 1
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
Electrical characterization and qualification tests were performed on the RCA MWS5001D, 1024 by 1-bit, CMOS, random access memory. Characterization tests were performed on five devices. The tests included functional tests, an AC parametric worst-case pattern selection test, determination of the worst-case transition for setup and hold times, and a series of schmoo plots. The qualification tests were performed on 32 devices and included a 2000-hour burn-in with electrical tests performed at 0 hours and after 168, 1000, and 2000 hours of burn-in. The tests performed included functional tests and AC and DC parametric tests. All of the tests in the characterization phase, with the exception of the worst-case transition test, were performed at ambient temperatures of 25, -55 and 125 C. The worst-case transition test was performed at 25 C. The pre-burn-in electrical tests were performed at 25, -55, and 125 C. All burn-in endpoint tests were performed at 25, -40, -55, 85, and 125 C.
NASA Technical Reports Server (NTRS)
Lindsey, J. F.
1976-01-01
The isolation between the upper S-band quad antenna and the S-band payload antenna on the shuttle orbiter is calculated using a combination of plane surface and curved surface theories along with worst-case values. A minimum value of 60 dB isolation is predicted based on recent antenna pattern data, antenna locations on the orbiter, curvature effects, dielectric covering effects and edge effects of the payload bay. The calculated value of 60 dB is significantly greater than the baseline value of 40 dB. Use of the new value will result in the design of smaller, lighter weight and less expensive filters for the S-band transponder and the S-band payload interrogator.
A simulation-optimization model for water-resources management, Santa Barbara, California
Nishikawa, Tracy
1998-01-01
In times of drought, the local water supplies of the city of Santa Barbara, California, are insufficient to satisfy water demand. In response, the city has built a seawater desalination plant and gained access to imported water in 1997. Of primary concern to the city is delivering water from the various sources at a minimum cost while satisfying water demand and controlling seawater intrusion that might result from the overpumping of ground water. A simulation-optimization model has been developed for the optimal management of Santa Barbara's water resources. The objective is to minimize the cost of water supply while satisfying various physical and institutional constraints such as meeting water demand, maintaining minimum hydraulic heads at selected sites, and not exceeding water-delivery or pumping capacities. The model is formulated as a linear programming problem with monthly management periods and a total planning horizon of 5 years. The decision variables are water deliveries from surface water (Gibraltar Reservoir, Cachuma Reservoir, Cachuma Reservoir cumulative annual carryover, Mission Tunnel, State Water Project, and desalinated seawater) and ground water (13 production wells). The state variables are hydraulic heads. Basic assumptions for all simulations are that (1) the cost of water varies with source but is fixed over time, and (2) only existing or planned city wells are considered; that is, the construction of new wells is not allowed. The drought of 1947-51 is Santa Barbara's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. Assumptions that were made for this base case include a head constraint equal to sea level at the coastal nodes; Cachuma Reservoir carryover of 3,000 acre-feet per year, with a maximum carryover of 8,277 acre-feet; a maximum annual demand of 15,000 acre-feet; and average monthly capacities for the Cachuma and the Gibraltar Reservoirs. The base-case results indicate that water demands can be met, with little water required from the most expensive water source (desalinated seawater), at a total cost of $5.56 million over the 5-year planning horizon. The simulation model has drains, which operate as nonlinear functions of heads and could affect the model solutions. However, numerical tests show that the drains have little effect on the optimal solution. Sensitivity analyses on the base case yield the following results: If allowable Cachuma Reservoir carryover is decreased by about 50 percent, then costs increase by about 14 percent; if the peak demand is decreased by 7 percent, then costs will decrease by about 14 percent; if the head constraints are loosened to -30 feet, then the costs decrease by about 18 percent; if the heads are constrained such that a zero hydraulic gradient condition occurs at the ocean boundary, then the optimization problem does not have a solution; if the capacity of the desalination plant is constrained to zero acre-feet, then the cost increases by about 2 percent; and if the carryover of State Water Project water is implemented, then the cost decreases by about 0.5 percent.
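A miniature of the linear-programming formulation described above, with two sources, one management period, and invented costs and capacities (the real model has many sources, monthly periods over 5 years, and head constraints coupled to the simulation model):

import numpy as np
from scipy.optimize import linprog

cost = np.array([120.0, 1900.0])        # $/acre-foot: surface water vs desalination (assumed)
demand = 1250.0                         # acre-feet to deliver in the period (assumed)
capacity = [(0, 1200.0), (0, 800.0)]    # per-source delivery limits (assumed)

# Minimize cost subject to deliveries summing exactly to demand
res = linprog(c=cost, A_eq=np.ones((1, 2)), b_eq=[demand], bounds=capacity)
print(res.x, res.fun)   # [1200., 50.]: exhaust the cheap source, desalinate the rest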
Four additional monthly diversion distribution scenarios for the reservoirs were tested: average monthly Cachuma Reservoir deliveries with the actual (scenario 1) and proposed (scenario 2) monthly distributions of Gibraltar Reservoir water, and variable monthly Cachuma Reservoir deliveries with the actual (scenario 3) and proposed (scenario 4) monthly distributions of Gibraltar Reservoir water. Scenario 1 resulted in a total cost of about $7.55 million, scenario 2 resulted in a total cost of about $5.07 million, and scenarios 3 and 4 resulted in a total cost of about $4.53 million. Sensitivities of scenarios 1 and 2 to desalination-plant capacity and State Water Project water carryover were tested. The scenario 1 sensitivity analysis indicated that incorpo
48 CFR 16.405-1 - Cost-plus-incentive-fee contracts.
Code of Federal Regulations, 2011 CFR
2011-10-01
... provides for the initially negotiated fee to be adjusted later by a formula based on the relationship of... minimum fee that may be a zero fee or, in rare cases, a negative fee. (c) Limitations. No cost-plus...
Cost effectiveness of the Oregon quitline "free patch initiative".
Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John
2007-12-01
We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating best-case and worst-case scenarios for each intervention strategy. Compared to the pre-initiative programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2,688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.
Olson, Scott A.; Medalie, Laura
1997-01-01
2 stone fill also protects the channel banks upstream and downstream of the bridge for a minimum distance of 17 feet from the respective bridge faces. Additional details describing conditions at the site are included in the Level II Summary and Appendices D and E. Scour depths and recommended rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1995). Total scour at a highway crossing is comprised of three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge) and; 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour and a summary of the results of these computations follows. Contraction scour computed for all modelled flows ranged from 0.9 to 5.0 ft. The worst-case contraction scour occurred at the 500-year discharge. Computed left abutment scour ranged from 15.3 to 16.5 ft. with the worst-case scour occurring at the incipient roadway-overtopping discharge. Computed right abutment scour ranged from 6.0 to 8.7 ft. with the worst-case scour occurring at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Economic analysis of pandemic influenza vaccination strategies in Singapore.
Lee, Vernon J; Tok, Mei Yin; Chow, Vincent T; Phua, Kai Hong; Ooi, Eng Eong; Tambyah, Paul A; Chen, Mark I
2009-09-22
All influenza pandemic plans advocate pandemic vaccination. However, few studies have evaluated the cost-effectiveness of different vaccination strategies. This paper compares the economic outcomes of vaccination with those of treatment with antiviral agents alone, in Singapore. We analyzed the economic outcomes of pandemic vaccination (immediate vaccination and vaccine stockpiling) compared with treatment-only in Singapore using a decision-based model to perform cost-benefit and cost-effectiveness analyses. We also explored the annual insurance premium (willingness to pay) depending on the perceived risk of the next pandemic occurring. The treatment-only strategy resulted in 690 deaths, 13,950 hospitalization days, and an economic cost of US$497 million. For immediate vaccination, at vaccine effectiveness of >55%, vaccination was cost-beneficial over treatment-only. Vaccine stockpiling is not cost-effective in most scenarios even with 100% vaccine effectiveness. The annual insurance premium was highest with immediate vaccination, and was lower with increased duration to the next pandemic. The premium was also higher with higher vaccine effectiveness, attack rates, and case-fatality rates. Stockpiling with case-fatality rates of 0.4-0.6% would be cost-beneficial if vaccine effectiveness was >80%; while at case-fatality rates of >5% stockpiling would be cost-beneficial even if vaccine effectiveness was 20%. High-risk sub-groups warrant higher premiums than low-risk sub-groups. The actual pandemic vaccine effectiveness and lead time are unknown. Vaccine strategy should be based on perception of severity. Immediate vaccination is most cost-effective, but requires vaccines to be available when required. Vaccine stockpiling as insurance against worst-case scenarios is also cost-effective. Research and development is therefore critical to develop and stockpile cheap, readily available effective vaccines.
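The decision-model comparison amounts to weighing strategy costs by the probability that a pandemic occurs. Every number below except the $497 million treatment-only figure is an assumption made for illustration.

p_pandemic = 0.03                  # assumed annual probability of a pandemic
treatment_only_cost = 497e6        # USD if a pandemic occurs, from the abstract
vaccination_cost = 5e6             # assumed annual cost of an immediate-vaccination policy
vaccinated_pandemic_cost = 250e6   # assumed pandemic cost given effective vaccination

ev_treatment = p_pandemic * treatment_only_cost
ev_vaccinate = vaccination_cost + p_pandemic * vaccinated_pandemic_cost
print(f"expected annual cost, treatment-only: ${ev_treatment / 1e6:.1f}M")   # $14.9M
print(f"expected annual cost, vaccination:    ${ev_vaccinate / 1e6:.1f}M")   # $12.5M
# The break-even premium is p_pandemic times the averted pandemic cost, which is
# the 'annual insurance premium' framing used in the abstract.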
Cost-Effectiveness of Diagnostic Strategies for Suspected Scaphoid Fractures.
Yin, Zhong-Gang; Zhang, Jian-Bing; Gong, Ke-Tong
2015-08-01
The aim of this study was to assess the cost effectiveness of multiple competing diagnostic strategies for suspected scaphoid fractures. With published data, the authors created a decision-tree model simulating the diagnosis of suspected scaphoid fractures. Clinical outcomes, costs, and cost effectiveness of immediate computed tomography (CT), day 3 magnetic resonance imaging (MRI), day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, week 2 radiographs-MRI, week 2 radiographs-bone scan, and immediate MRI were evaluated. The primary clinical outcome was the detection of scaphoid fractures. The authors adopted a societal perspective, including both the costs of healthcare and the cost of lost productivity. The incremental cost-effectiveness ratio (ICER), which expresses the incremental cost per incremental scaphoid fracture detected using a strategy, was calculated to compare these diagnostic strategies. Base case analysis, 1-way sensitivity analyses, and "worst case scenario" and "best case scenario" sensitivity analyses were performed. In the base case, the average cost per scaphoid fracture detected with immediate CT was $2553. The ICER of immediate MRI and day 3 MRI compared with immediate CT was $7483 and $32,000 per scaphoid fracture detected, respectively. The ICER of week 2 radiographs-MRI was around $170,000. Day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, and week 2 radiographs-bone scan strategy were dominated or extendedly dominated by MRI strategies. The results were generally robust in multiple sensitivity analyses. Immediate CT and MRI were the most cost-effective strategies for diagnosing suspected scaphoid fractures. Economic and Decision Analyses Level II. See Instructions for Authors for a complete description of levels of evidence.
A bioinspired collision detection algorithm for VLSI implementation
NASA Astrophysics Data System (ADS)
Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.
2005-06-01
In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle upon Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before, at 25 frames per second). Since the average time to successfully fire an airbag system is 2 msec, even in the worst case this algorithm would be very helpful in arming the airbag system more efficiently, or even in taking collision-avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power on the most active zones of the input frame, thus saving memory and processing-time resources.
Anselmino, Marco; Bammer, Tanja; Fernández Cebrián, José Maria; Daoud, Frederic; Romagnoli, Giuliano; Torres, Antonio
2009-11-01
This study aimed to establish a payer-perspective cost-effectiveness and budget impact model of adjustable gastric banding (AGB) and gastric bypass (GBP) vs. conventional treatment (CT) in patients with a body mass index (BMI) ≥ 35 kg/m² and type 2 diabetes mellitus (T2DM) in Austria, Italy, and Spain. A health economics model described in a previous publication was applied to resource utilization and cost data in AGB, GBP, and CT from Austria, Italy, and Spain in 2009. The base case time scope is 5 years; the annual discount rate for utilities and costs is 3.5%. In Austria and Italy, both AGB and GBP are cost-saving and are thus dominant in terms of incremental cost-effectiveness ratio compared to CT. In Spain, AGB and GBP yield a moderate cost increase but are cost-effective, assuming a willingness-to-pay threshold of €30,000 per quality-adjusted life-year. Under worst-case analysis, AGB and GBP remain cost-saving or around breakeven in Austria and Italy and remain cost-effective in Spain. In patients with T2DM and BMI ≥ 35 kg/m² at 5-year follow-up vs. CT, AGB and GBP are not only clinically effective and safe but represent satisfactory value for money from a payer perspective in Austria, Italy, and Spain.
Query Optimization in Distributed Databases.
1982-10-01
[Fragmentary OCR text; recoverable content only: in general, one semijoin strategy is more time-consuming than another and is usually not used; of interest is the study of the analytic behavior of these heuristic algorithms, although some analytic results of worst-case and average-case analysis are difficult to obtain.]
NASA Astrophysics Data System (ADS)
Snoussi, Maria; Ouchani, Tachfine; Niazi, Saïda
2008-04-01
The eastern part of the Mediterranean coast of Morocco is physically and socio-economically vulnerable to accelerated sea-level rise, due to its low topography and its high ecological and touristic value. Assessment of the potential land loss by inundation has been based on empirical approaches using a minimum inundation level of 2 m and a maximum inundation level of 7 m, where scenarios for future sea-level rise range from 200 to 860 mm, with a 'best estimate' of 490 mm. The socio-economic impacts have been based on two possible alternative futures: (1) a 'worst-case' scenario, obtained by combining the 'economic development first' scenario with the maximum inundation level; and (2) a 'best-case' scenario, by combining the 'sustainability first' scenario with the minimum inundation level. Inundation analysis, based on Geographical Information Systems and a modelling approach to erosion, has identified both locations and the socioeconomic sectors that are most at risk to accelerated sea-level rise. Results indicate that 24% and 59% of the area will be lost by flooding at minimum and maximum inundation levels, respectively. The most severely impacted sectors are expected to be the residential and recreational areas, agricultural land, and the natural ecosystem. Shoreline erosion will affect 50% and 70% of the total area in 2050 and 2100, respectively. Potential strategies to ameliorate the impact of seawater inundation include: wetland preservation; beach nourishment at tourist resorts; and the afforestation of dunes. As this coast is planned to become one of the most developed tourist resorts in Morocco by 2010, measures such as building regulation, urban growth planning and development of an Integrated Coastal Zone Management Plan, are recommended for the region.
Grieger, Khara D; Hansen, Steffen F; Sørensen, Peter B; Baun, Anders
2011-09-01
Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C(60) in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C(60) in these applications as well as demonstrate the use of worst-case conditions to optimize future research efforts for other nanomaterials. Through the application of the WCD model, we find that the most probable worst-case conditions for both case studies include i) active uptake mechanisms, ii) accumulation in organisms, iii) ecotoxicological response mechanisms such as reactive oxygen species (ROS) production and cell membrane damage or disruption, iv) surface properties of nZVI and C(60), and v) acute exposure tolerance of organisms. Additional estimates of worst-case conditions for C(60) also include the physical location of C(60) in the environment from surface run-off, cellular exposure routes for heterotrophic organisms, and the presence of light to amplify adverse effects. Based on results of this analysis, we recommend the prioritization of research for the selected applications within the following areas: organism active uptake ability of nZVI and C(60) and ecotoxicological response end-points and response mechanisms including ROS production and cell membrane damage, full nanomaterial characterization taking into account detailed information on nanomaterial surface properties, and investigations of dose-response relationships for a variety of organisms. Copyright © 2011 Elsevier B.V. All rights reserved.
Inhibition Of Washed Sludge With Sodium Nitrite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Congdon, J. W.; Lozier, J. S.
2012-09-25
This report describes the results of electrochemical tests used to determine the relationship between the concentration of the aggressive anions in washed sludge and the minimum effective inhibitor concentration. Sodium nitrite was added as the inhibitor because of its compatibility with the DWPF process. A minimum of 0.05M nitrite is required to inhibit the washed sludge simulant solution used in this study. When the worst-case compositions and safety margins are considered, it is expected that a minimum operating limit of nearly 0.1M nitrite will be specified. The validity of this limit is dependent on the accuracy of the concentrations and solubility splits previously reported. Sodium nitrite additions to obtain 0.1M nitrite concentrations in washed sludge will necessitate the additional washing of washed precipitate in order to decrease its sodium nitrite inhibitor requirements sufficiently to remain below the sodium limits in the feed to the DWPF. Nitrite will be the controlling anion in "fresh" washed sludge unless the soluble chloride concentration is about ten times higher than predicted by the solubility splits. Inhibition of "aged" washed sludge will not be a problem unless significant chloride dissolution occurs during storage. It will be very important to monitor the composition of washed sludge during processing and storage.
Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects
ERIC Educational Resources Information Center
Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor
2011-01-01
Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2014 CFR
2014-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2012 CFR
2012-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2013 CFR
2013-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
Balancing reliability and cost to choose the best power subsystem
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
A mathematical model is presented for computing total (spacecraft) subsystem cost including both the basic subsystem cost and the expected cost due to the failure of the subsystem. This model is then used to determine power subsystem cost as a function of reliability and redundancy. Minimum cost and maximum reliability and/or redundancy are not generally equivalent. Two example cases are presented. One is a small satellite, and the other is an interplanetary spacecraft.
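The cost trade-off described in this abstract can be illustrated with a minimal sketch (illustrative numbers only, not the paper's model): total expected cost is the basic hardware cost plus the failure probability times the cost of failure, and adding redundant units raises the former while shrinking the latter.
```python
# Minimal sketch of the total-cost idea above (all numbers hypothetical):
# expected total cost = basic subsystem cost + P(failure) * failure cost.

def total_cost(unit_cost, unit_reliability, n_units, failure_cost):
    """Expected total cost with n identical units in parallel (redundant)."""
    basic = unit_cost * n_units
    p_fail = (1.0 - unit_reliability) ** n_units  # fails only if all units fail
    return basic + p_fail * failure_cost

for n in range(1, 5):
    print(n, total_cost(2.0, 0.9, n, 100.0))
# The minimum expected cost sits at an intermediate redundancy level (n = 2
# here), not at maximum reliability, which is the abstract's central point.
```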
Cost-utility of enoxaparin compared with unfractionated heparin in unstable coronary artery disease
Nicholson, Tricia; McGuire, Alistair; Milne, Ruairidh
2001-01-01
Background Low molecular weight heparins hold several advantages over unfractionated heparin including convenience of administration. Enoxaparin is one such heparin licensed in the UK for use in unstable coronary artery disease (unstable angina and non-Q-wave myocardial infarction). In these patients, two large randomised controlled trials and their meta-analysis showed small benefits for enoxaparin over unfractionated heparin at 30–43 days and potentially at one year. We found no relevant published full economic evaluations, only cost studies, one of which was conducted in the UK. The other studies, from the US, Canada and France, are difficult to interpret since their resource use and costs may not reflect UK practice. Methods We aimed to compare the benefits and costs of short-term treatment (two to eight days) with enoxaparin and unfractionated heparin in unstable coronary artery disease. We used published data sources to estimate the incremental cost per quality adjusted life year (QALY), adopting an NHS perspective and using 1998 prices. Results The base case was a 0.013 QALY gain and net cost saving of £317 per person treated with enoxaparin instead of unfractionated heparin. All but one sensitivity analysis showed net savings and QALY gains, the exception (the worst case) being a cost per QALY of £3,305. Best cases were a £495 saving and 0.013 QALY gain, or a £317 saving and 0.014 QALY gain per person. Conclusions Enoxaparin appears cost saving compared with unfractionated heparin in patients with unstable coronary artery disease. However, cost implications depend on local revascularisation practice. PMID:11701090
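For readers unfamiliar with cost-utility conventions, the arithmetic behind the abstract's headline figures is simple enough to show directly (a sketch using only the numbers quoted above):
```python
# Cost-utility arithmetic using the figures quoted in the abstract. A net
# saving combined with a QALY gain means the treatment "dominates".

qaly_gain = 0.013    # QALYs gained per person, base case
net_cost = -317.0    # GBP per person; negative = net saving

if net_cost <= 0 and qaly_gain > 0:
    print("enoxaparin dominates unfractionated heparin (base case)")
else:
    print(f"cost per QALY: {net_cost / qaly_gain:,.0f} GBP")

# In the worst-case sensitivity analysis the net cost turns positive and the
# ratio becomes ~3,305 GBP per QALY (about 43 GBP net cost / 0.013 QALY).
```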
Xie, Yu; Tan, Xiaodong; Shao, Haiyan; Liu, Qing; Tou, Jiyu; Zhang, Yuling; Luo, Qiong; Xiang, Qunying
2017-01-25
Screening is the main preventive method for cervical cancer in developing countries, but each type of screening has advantages and disadvantages. To investigate the most suitable method for low-income areas in China, we conducted a health economic analysis comparing three methods: visual inspection with acetic acid and Lugol's iodine (VIA/VILI), ThinPrep cytology test (TCT), and human papillomavirus (HPV) test. We recruited 3086 women aged 35-65 years using cluster random sampling. Each participant was randomly assigned to one of three cervical cancer screening groups: VIA/VILI, TCT, or HPV test. To calculate the number of disability-adjusted life years (DALYs) averted by each screening method, we used Markov models to simulate the natural development of cervical cancer over a 15-year period and to estimate the age of onset and duration of each disease stage. The cost-effectiveness ratios (CERs), net present values (NPVs), benefit-cost ratios (BCRs), and cost-utility ratios (CURs) were used as outcomes in the health economic analysis. The positive detection rate in the VIA/VILI group was 1.39%, which was 4.6 and 2.0 times higher than the rates in the TCT and HPV test groups, respectively. The positive predictive value of VIA/VILI (10.53%) was highest, while the rate of referral for colposcopy was lowest in the HPV + TCT group (0.60%). VIA/VILI performed the best in terms of health economic evaluation results, as the cost per positive case detected was 8467.9 RMB, which was 24503.0 RMB lower than that for TCT and 5755.9 RMB lower than that for the HPV test. In addition, the NPV and BCR values were 258011.5 RMB and 3.18 (the highest), and the CUR was 2341.8 RMB (the lowest). The TCT performed the worst, since its NPV was <0 and the BCR was <1, indicating poor cost-benefit performance. With the best economic evaluation results and the lowest demand on medical resources, VIA/VILI is recommended for cervical cancer screening in poverty-stricken areas of China with a high incidence of cervical cancer and a lack of medical resources.
41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What is meant by “reasonable worst case fire scenario”? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is meant by “reasonable worst case fire scenario”? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...
Robust Flutter Margin Analysis that Incorporates Flight Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Martin J.
1998-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
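The essence of the mu test mentioned above can be stated compactly (notation assumed here, not taken from the report):
```latex
% With modeling errors and flight variations collected in a normalized
% structured uncertainty Delta, the structured singular value certifies
% robust stability:
\[
  \sup_{\omega}\;\mu_{\Delta}\!\left(M(j\omega)\right) < 1
  \quad\Longrightarrow\quad
  \text{stable for all admissible } \Delta \text{ with } \bar{\sigma}(\Delta)\le 1 ,
\]
% and the worst-case flutter margin is the largest flight-condition
% perturbation for which this condition still holds.
```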
Davies, Thomas W; Jenkins, Stuart R; Kingham, Rachel; Kenworthy, Joseph; Hawkins, Stephen J; Hiddink, Jan G
2011-01-01
Key ecosystem processes such as carbon and nutrient cycling could be deteriorating as a result of biodiversity loss. However, currently we lack the ability to predict the consequences of realistic species loss on ecosystem processes. The aim of this study was to test whether species contributions to community biomass can be used as surrogate measures of their contribution to ecosystem processes. These were gross community productivity in a salt marsh plant assemblage and an intertidal macroalgae assemblage; community clearance of microalgae in sessile suspension feeding invertebrate assemblage; and nutrient uptake in an intertidal macroalgae assemblage. We conducted a series of biodiversity manipulations that represented realistic species extinction sequences in each of the three contrasting assemblages. Species were removed in a subtractive fashion so that biomass was allowed to vary with each species removal, and key ecosystem processes were measured at each stage of community disassembly. The functional contribution of species was directly proportional to their contribution to community biomass in a 1:1 ratio, a relationship that was consistent across three contrasting marine ecosystems and three ecosystem processes. This suggests that the biomass contributed by a species to an assemblage can be used to approximately predict the proportional decline in an ecosystem process when that species is lost. Such predictions represent "worst case scenarios" because, over time, extinction resilient species can offset the loss of biomass associated with the extinction of competitors. We also modelled a "best case scenario" that accounts for compensatory responses by the extant species with the highest per capita contribution to ecosystem processes. These worst and best case scenarios could be used to predict the minimum and maximum species required to sustain threshold values of ecosystem processes in the future.
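The reported 1:1 relationship lends itself to a very small worked example (a sketch with hypothetical numbers, not the study's data):
```python
# Minimal sketch of the 1:1 surrogate relationship reported above: the
# proportional decline in an ecosystem process when a species is lost is
# approximated by that species' share of community biomass (worst case,
# i.e. no compensation by the remaining species).

biomass = {"species_A": 50.0, "species_B": 30.0, "species_C": 20.0}

def worst_case_process_remaining(biomass, lost):
    """Fraction of the ecosystem process expected to remain after losses."""
    total = sum(biomass.values())
    remaining = total - sum(biomass[s] for s in lost)
    return remaining / total

print(worst_case_process_remaining(biomass, ["species_B"]))  # -> 0.7
```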
Davies, Thomas W.; Jenkins, Stuart R.; Kingham, Rachel; Kenworthy, Joseph; Hawkins, Stephen J.; Hiddink, Jan G.
2011-01-01
Key ecosystem processes such as carbon and nutrient cycling could be deteriorating as a result of biodiversity loss. However, currently we lack the ability to predict the consequences of realistic species loss on ecosystem processes. The aim of this study was to test whether species contributions to community biomass can be used as surrogate measures of their contribution to ecosystem processes. These were gross community productivity in a salt marsh plant assemblage and an intertidal macroalgae assemblage; community clearance of microalgae in sessile suspension feeding invertebrate assemblage; and nutrient uptake in an intertidal macroalgae assemblage. We conducted a series of biodiversity manipulations that represented realistic species extinction sequences in each of the three contrasting assemblages. Species were removed in a subtractive fashion so that biomass was allowed to vary with each species removal, and key ecosystem processes were measured at each stage of community disassembly. The functional contribution of species was directly proportional to their contribution to community biomass in a 1∶1 ratio, a relationship that was consistent across three contrasting marine ecosystems and three ecosystem processes. This suggests that the biomass contributed by a species to an assemblage can be used to approximately predict the proportional decline in an ecosystem process when that species is lost. Such predictions represent “worst case scenarios” because, over time, extinction resilient species can offset the loss of biomass associated with the extinction of competitors. We also modelled a “best case scenario” that accounts for compensatory responses by the extant species with the highest per capita contribution to ecosystem processes. These worst and best case scenarios could be used to predict the minimum and maximum species required to sustain threshold values of ecosystem processes in the future. PMID:22163297
Sartori, Ana Marli Christovam; de Soárez, Patrícia Coelho; Fernandes, Eder Gatti; Gryninger, Ligia Castellon Figueiredo; Viscondi, Juliana Yukari Kodaira; Novaes, Hillegonda Maria Dutilh
2016-03-18
Pertussis incidence has increased significantly in Brazil since 2011, despite high coverage of whole-cell pertussis containing vaccines in childhood. Infants <4 months are most affected. This study aimed to evaluate the cost-effectiveness of introducing universal maternal vaccination with tetanus-diphtheria-acellular pertussis vaccine (Tdap) into the National Immunization Program in Brazil. Economic evaluation using a decision tree model comparing two strategies: (1) universal vaccination with one dose of Tdap in the third trimester of pregnancy and (2) current practice (no pertussis maternal vaccination), from the perspective of the health system and society. An annual cohort of newborns representing the number of vaccinated pregnant women were followed for one year. Vaccine efficacy were based on literature review. Epidemiological, healthcare resource utilization and cost estimates were based on local data retrieved from Brazilian Health Information Systems. Costs of epidemiological investigation and treatment of contacts of cases were included in the analysis. No discount rate was applied to costs and benefits, as the temporal horizon was one year. Primary outcome was cost per life year saved (LYS). Univariate and best- and worst-case scenarios sensitivity analysis were performed. Maternal vaccination of one annual cohort, with vaccine effectiveness of 78%, and vaccine cost of USD$12.39 per dose, would avoid 661 cases and 24 infant deaths of pertussis, save 1800 years of life and cost USD$28,942,808 and USD$29,002,947, respectively, from the health system and societal perspective. The universal immunization would result in ICERs of USD$15,608 and USD$15,590 per LYS, from the health system and societal perspective, respectively. In sensitivity analysis, the ICER was most sensitive to discounting of life years saved, variation in case-fatality, disease incidence, vaccine cost, and vaccine effectiveness. The results indicate that universal maternal immunization with Tdap is a cost-effective intervention for preventing pertussis cases and deaths in infants in Brazil. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rubio-Terrés, C; Domínguez-Gil Hurlé, A
To carry out a cost-utility analysis of the treatment of relapsing-remitting multiple sclerosis (RRMS) with azathioprine (Imurel) or beta interferon (all, Avonex, Rebif and Betaferon). Pharmacoeconomic Markov model comparing treatment options by simulating the life of a hypothetical cohort of women aged 30, from the societal perspective. The transition probabilities, utilities, resource utilisation and costs (direct and indirect) were obtained from Spanish sources and from bibliography. Univariant sensitivity analyses of the base case were performed. In the base case analysis, the average cost per patient (euros in 2003) of a life treatment, considering a life expectancy of 53 years, would be 620,205, 1,047,836, 1,006,014, 1,161,638 and 968,157 euros with Imurel, all interferons, Avonex, Rebif and Betaferon, respectively. Therefore, the saving with Imurel would range between 327,000 and 520,000 euros approximately. The quality-adjusted life years (QALY) obtained with Imurel or interferons would be 10.08 and 9.30, respectively, with an average gain of 0.78 QALY per patient treated with Imurel. The sensitivity analyses confirmed the robustness of the base case. The cost of one additional QALY with interferons would range between 413,000 and 1,308,000 euros approximately in the hypothetical worst scenario for Imurel. For a typical patient with RRMS, treatment with Imurel would be more efficient than interferons and would dominate (would be more efficacious with lower costs) beta interferon.
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory known as the "trust region subproblem" or "constraint least square problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
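For readers who want the problem statement in symbols, a compact restatement (notation assumed, not taken from the paper):
```latex
\[
  \min_{\mathbf{x}\in\mathbb{R}^{N}}\;
  H(\mathbf{x}) = \tfrac{1}{2}\,\mathbf{x}^{\mathsf{T}} J\,\mathbf{x}
  - \mathbf{h}^{\mathsf{T}}\mathbf{x}
  \quad \text{subject to} \quad \lVert \mathbf{x} \rVert^{2} = N ,
\]
% with J a random symmetric (GOE-type) matrix and h the random linear term
% (the "magnetic field"); increasing the field strength drives the topology
% trivialization, i.e. the drop in the number N_tot of stationary points.
```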
Discrete Fluctuations in Memory Erasure without Energy Cost
NASA Astrophysics Data System (ADS)
Croucher, Toshio; Bedkihal, Salil; Vaccaro, Joan A.
2017-02-01
According to Landauer's principle, erasing one bit of information incurs a minimum energy cost. Recently, Vaccaro and Barnett (VB) explored information erasure within the context of generalized Gibbs ensembles and demonstrated that for energy-degenerate spin reservoirs the cost of erasure can be solely in terms of a minimum amount of spin angular momentum and no energy. As opposed to the Landauer case, the cost of erasure in this case is associated with an intrinsically discrete degree of freedom. Here we study the discrete fluctuations in this cost and the probability of violation of the VB bound. We also obtain a Jarzynski-like equality for the VB erasure protocol. We find that the fluctuations below the VB bound are exponentially suppressed at a far greater rate and more tightly than for an equivalent Jarzynski expression for VB erasure. We expose a trade-off between the size of the fluctuations and the cost of erasure. We find that the discrete nature of the fluctuations is pronounced in the regime where reservoir spins are maximally polarized. We also state the first laws of thermodynamics corresponding to the conservation of spin angular momentum for this particular erasure protocol. Our work will be important for novel heat engines based on information erasure schemes that do not incur an energy cost.
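For reference, the Landauer bound invoked above in its standard form; per the abstract, the VB protocol trades this energy cost for a spin-angular-momentum cost when the reservoirs are energy degenerate:
```latex
\[
  \langle W \rangle \;\ge\; k_{\mathrm{B}}\, T \ln 2
  \qquad \text{per bit erased.}
\]
```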
Less than severe worst case accidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, G.A.
1996-08-01
Many systems can provide tremendous benefit if operating correctly, produce only an inconvenience if they fail to operate, but have extreme consequences if they are only partially disabled such that they operate erratically or prematurely. In order to assure safety, systems are often tested against the most severe environments and accidents that are considered possible to ensure either safe operation or safe failure. However, it is often the less severe environments which result in the "worst case accident" since these are the conditions in which part of the system may be exposed or rendered unpredictable prior to total system failure. Some examples of less severe mechanical, thermal, and electrical environments which may actually be worst case are described as cautions for others in industries with high consequence operations or products.
Optimization of solar cell contacts by system cost-per-watt minimization
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.
Multidisciplinary tailoring of hot composite structures
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.; Chamis, Christos C.
1993-01-01
A computational simulation procedure is described for multidisciplinary analysis and tailoring of layered multi-material hot composite engine structural components subjected to simultaneous multiple discipline-specific thermal, structural, vibration, and acoustic loads. The effect of aggressive environments is also simulated. The simulation is based on a three-dimensional finite element analysis technique in conjunction with structural mechanics codes, thermal/acoustic analysis methods, and tailoring procedures. The integrated multidisciplinary simulation procedure is general-purpose, including the coupled effects of nonlinearities in structure geometry, material, loading, and environmental complexities. The composite material behavior is assessed at all composite scales, i.e., laminate/ply/constituents (fiber/matrix), via a nonlinear material characterization hygro-thermo-mechanical model. Sample tailoring cases exhibiting nonlinear material/loading/environmental behavior of aircraft engine fan blades are presented. The various multidisciplinary loads lead to different tailored designs, even designs that compete with each other, as in the case of minimum material cost versus minimum structure weight and in the case of minimum vibration frequency versus minimum acoustic noise.
Integrated Optoelectronic Networks for Application-Driven Multicore Computing
2017-05-08
hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. The key challenges for waveguide photonics... improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and... photonic crossbar architectures (Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7
Space station ventilation study
NASA Technical Reports Server (NTRS)
Colombo, G. V.; Allen, G. E.
1972-01-01
A ventilation system design and selection method applicable to any manned vehicle was developed. The method was used to generate design options for the NASA 33-foot diameter space station, all of which meet the ventilation system design requirements. System characteristics such as weight, volume, and power were normalized to dollar costs for each option. Total system costs for the various options ranged from $8 million in the worst case down to a group of four options at approximately $2 million each. A system design was then chosen from the $2 million group and is presented in detail. A ventilation system layout was designed for the MSFC space station mockup which provided comfortable, efficient ventilation of the mockup. A conditioned air distribution system design for the 14-foot diameter modular space station, using the same techniques, is also presented. The tradeoff study resulted in the selection of a system which costs $1.9 million, as compared to the alternate configuration which would have cost $2.6 million.
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance, then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.
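The weighted-goal evolutionary search described above can be sketched in a few lines (everything here is hypothetical and only illustrates the mechanics, not the authors' simulation):
```python
import random

# Toy evolutionary search: weight the subsystem goals (mass, cost,
# performance) into one fitness value and evolve a sizing parameter.

random.seed(0)
W_MASS, W_COST, W_PERF = 0.3, 0.3, 0.4   # engineer-chosen goal weights

def fitness(size):
    """Weighted score for a candidate power-element size (e.g., array area)."""
    mass, cost = 2.0 * size, 1.5 * size
    perf = min(size / 10.0, 1.0)          # performance saturates
    return W_PERF * perf - W_MASS * mass / 50.0 - W_COST * cost / 50.0

pop = [random.uniform(1.0, 30.0) for _ in range(20)]
for _ in range(40):                       # keep the fittest half, then mutate
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [p + random.gauss(0.0, 1.0) for p in pop[:10]]
pop.sort(key=fitness, reverse=True)
print(round(pop[0], 2), round(fitness(pop[0]), 3))   # converges near size 10
```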
Approximate solution of the p-median minimization problem
NASA Astrophysics Data System (ADS)
Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.
2016-09-01
A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
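A minimal version of such a gradient (discrete steepest-descent) algorithm for the p-median problem, sketched under the usual formulation (choose p facility sites minimizing total client-to-nearest-facility distance); this illustrates the algorithm class, not the paper's exact procedure:
```python
import numpy as np

def pmedian_cost(D, medians):
    """Total cost: each client is served by its nearest chosen median."""
    return D[:, medians].min(axis=1).sum()

def greedy_descent(D, p):
    """Discrete steepest descent: repeatedly apply improving single swaps."""
    n = D.shape[1]
    medians = list(range(p))                  # arbitrary feasible start
    best = pmedian_cost(D, medians)
    improved = True
    while improved:
        improved = False
        for i in range(p):
            for c in set(range(n)) - set(medians):
                trial = medians[:i] + [c] + medians[i + 1:]
                cost = pmedian_cost(D, trial)
                if cost < best:
                    best, medians, improved = cost, trial, True
    return medians, best

rng = np.random.default_rng(0)
pts = rng.random((30, 2))                      # sites double as clients here
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(greedy_descent(D, p=3))
```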
Sharifi, Emile; Porco, Travis C; Naseri, Ayman
2009-10-01
To evaluate the cost-effectiveness of intracameral cefuroxime for postoperative endophthalmitis prophylaxis, and to determine the efficacy threshold necessary for alternative antibiotics to attain cost-effective equivalence with intracameral cefuroxime. Cost-effectiveness analysis. We study a hypothetical cohort of 100,000 patients undergoing cataract surgery as a part of the cost analysis. A cost-effectiveness model was constructed to analyze different antibiotic prophylactic regimens for postoperative endophthalmitis with intracameral cefuroxime as our base case. Efficacy was defined as the absolute reduction in rate of infection from the background rate of infection, which was sourced from the literature. Antibiotic cost data were derived from the Red Book 2007 edition, and salary data were taken from the United States Bureau of Labor Statistics. Multivariate sensitivity analysis assessed the performance of antibiotic options under different scenarios. Cost per case of endophthalmitis prevented; theoretical maximal cost-effectiveness; efficacy threshold necessary to achieve cost-effective equivalence with intracameral cefuroxime; ratio indicating how many times more effective or less expensive alternative antibiotics would have to be to achieve cost-effective equivalence with intracameral cefuroxime. The cost-effectiveness ratio for intracameral cefuroxime is $1403 per case of postoperative endophthalmitis prevented. By comparison, the least expensive topical fluoroquinolone in our study, ciprofloxacin, would have to be >8 times more effective than intracameral cefuroxime to achieve cost-effective equivalence. The most expensive topical fluoroquinolones studied, gatifloxacin and moxifloxacin, would have to be ≥19 times more effective than intracameral cefuroxime to achieve cost-effective equivalence. A sensitivity analysis reveals that even in the worst case scenario for intracameral cefuroxime efficacy and with a 50% reduction in the cost of 4th-generation fluoroquinolones, gatifloxacin and moxifloxacin would have to be ≥9 times more effective than intracameral cefuroxime to achieve cost-effective equivalence. Administration of intracameral cefuroxime is relatively cost-effective in preventing endophthalmitis after cataract surgery. Owing to their high costs, many commonly used topical antibiotics are not cost-effective compared with intracameral cefuroxime, even under optimistic assumptions about their efficacy.
Resource Allocation and Resident Outcomes in Nursing Homes: Comparisons between the Best and Worst
Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui-Fang
2005-01-01
The purpose of this study was to identify patterns of resource allocation that related to resident outcomes in nursing homes. Data on structure, staffing levels, salaries, cost, casemix, and resident outcomes were obtained from state-level, administrative databases on 494 nursing homes. We identified two sets of comparison groups and showed that the group of homes with the greatest percentage of improvement in resident outcomes had higher levels of RN staffing and higher costs. However, comparison groups based on best/worst average outcomes did not differ in resource allocation patterns. Additional analysis demonstrated that when controlling for RN staffing, resident outcomes in high and low cost homes did not differ. The results suggest that, although RN staffing is more expensive, it is key to improving resident outcomes. PMID:9679807
Sustainability of fisheries through marine reserves: a robust modeling analysis.
Doyen, L; Béné, C
2003-09-01
Among the many factors that contribute to overexploitation of marine fisheries, the role played by uncertainty is important. This uncertainty includes both the scientific uncertainties related to the resource dynamics or assessments and the uncontrollability of catches. Some recent works advocate for the use of marine reserves as a central element of future stock management. In the present paper, we study the influence of protected areas upon fisheries sustainability through a simple dynamic model integrating non-stochastic harvesting uncertainty and a constraint of safe minimum biomass level. Using the mathematical concept of invariance kernel in a robust and worst-case context, we examine through a formal modeling analysis how marine reserves might guarantee viable fisheries. We also show how sustainability requirement is not necessarily conflicting with optimization of catches. Numerical simulations are provided to illustrate the main findings.
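The worst-case flavor of the analysis can be conveyed with a toy model (all parameters hypothetical): if catches are completely uncontrollable, the worst case is that everything outside the reserve is taken each season, and viability asks which reserve fractions keep biomass above the safe minimum level.
```python
# Toy worst-case viability check in the spirit of the model above.

def viable(reserve_frac, years=50, B=100.0, r=0.4, K=200.0, B_min=40.0):
    for _ in range(years):
        B += r * B * (1 - B / K)    # logistic stock growth
        B *= reserve_frac           # worst case: open area fully fished
        if B < B_min:               # safe minimum biomass level violated
            return False
    return True

for f in (0.2, 0.5, 0.8):
    print(f, viable(f))             # -> False, False, True
```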
Quality care means valuing care assistants, porters, and cleaners too.
Toynbee, P
2003-12-01
All too often, the focus of the very clever strategy papers produced in the upper reaches of the health department is on the next grand plan. Some of these reforms have been catastrophic for the quality of service that patients experience at ward level. Of these, the contracting out culture introduced in the 1980s and the 1990s has been the worst. Researching my book, Hard Work: Life in Low-Pay Britain, I took six jobs at around the minimum wage, including work as a hospital porter, as a hospital cleaner, and as a care assistant. These are jobs at the sharp end, up close and very personal to the patients, strongly influencing their experiences of the services they were using. Yet they are low paid, undervalued jobs that fall below the radar of the policy makers. In hospitals they need to be brought back in-house and integrated into a team ethos. Paying these people more would cost more, but it would also harvest great rewards by using their untapped commitment.
Method of Generating Transient Equivalent Sink and Test Target Temperatures for Swift BAT
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2004-01-01
The NASA Swift mission has a 600-km altitude and a 22-degree maximum inclination. The sun angle varies from 45 degrees to 180 degrees in normal operation. As a result, environmental heat fluxes absorbed by the Burst Alert Telescope (BAT) radiator and loop heat pipe (LHP) compensation chambers (CCs) vary transiently. Therefore the equivalent sink temperatures for the radiator and CCs vary transiently. In thermal performance verification testing in vacuum, the radiator and CCs radiated heat to sink targets. This paper presents an analytical technique for generating orbit transient equivalent sink temperatures and a technique for generating transient sink target temperatures for the radiator and LHP CCs. Using these techniques, transient target temperatures for the radiator and LHP CCs were generated for three thermal environmental cases: worst hot case, worst cold case, and cooldown and warmup between the worst hot case in sunlight and the worst cold case in eclipse, and three different heat transport values: 128 W, 255 W, and 382 W. The 128 W case assumed that the two LHPs transport the 255 W load to the radiator equally (about 128 W each). The 255 W case assumed that one LHP fails so that the remaining LHP transports all the waste heat from the detector array to the radiator. The 382 W case assumed that one LHP fails so that the remaining LHP transports all the waste heat from the detector array to the radiator, and has a 50% design margin. All these transient target temperatures were successfully implemented in the engineering test unit (ETU) LHP and flight LHP thermal performance verification tests in vacuum.
Heart failure disease management programs: a cost-effectiveness analysis.
Chan, David C; Heidenreich, Paul A; Weinstein, Milton C; Fonarow, Gregg C
2008-02-01
Heart failure (HF) disease management programs have shown impressive reductions in hospitalizations and mortality, but in studies limited to short time frames and high-risk patient populations. Current guidelines thus only recommend disease management targeted to high-risk patients with HF. This study applied a new technique to infer the degree to which clinical trials have targeted patients by risk based on observed rates of hospitalization and death. A Markov model was used to assess the incremental life expectancy and cost of providing disease management for high-risk to low-risk patients. Sensitivity analyses of various long-term scenarios and of reduced effectiveness in low-risk patients were also considered. The incremental cost-effectiveness ratio of extending coverage to all patients was $9700 per life-year gained in the base case. In aggregate, universal coverage almost quadrupled life-years saved as compared to coverage of only the highest quintile of risk. A worst case analysis with simultaneous conservative assumptions yielded an incremental cost-effectiveness ratio of $110,000 per life-year gained. In a probabilistic sensitivity analysis, 99.74% of possible incremental cost-effectiveness ratios were <$50,000 per life-year gained. Heart failure disease management programs are likely cost-effective in the long-term along the whole spectrum of patient risk. Health gains could be extended by enrolling a broader group of patients with HF in disease management.
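A minimal two-state Markov cohort sketch of the kind of model described above (all rates and costs hypothetical, not the paper's inputs):
```python
# Two states (alive/dead): disease management lowers annual mortality at an
# added yearly program cost; the ICER is incremental cost per life-year.

def cohort(p_die, annual_cost, years=20):
    """Undiscounted life-years and cost for a cohort of size 1."""
    alive, life_years, cost = 1.0, 0.0, 0.0
    for _ in range(years):
        life_years += alive
        cost += alive * annual_cost
        alive *= (1.0 - p_die)
    return life_years, cost

ly_dm, cost_dm = cohort(p_die=0.10, annual_cost=6000.0)   # with program
ly_uc, cost_uc = cohort(p_die=0.12, annual_cost=4500.0)   # usual care

icer = (cost_dm - cost_uc) / (ly_dm - ly_uc)
print(f"ICER: ${icer:,.0f} per life-year gained")
```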
Humblet, Marie-France; Vandeputte, Sébastien; Fecher-Bourgeois, Fabienne; Léonard, Philippe; Gosset, Christiane; Balenghien, Thomas; Durand, Benoît; Saegerman, Claude
2016-01-01
This study aimed at estimating, in a prospective scenario, the potential economic impact of a possible epidemic of WNV infection in Belgium, based on 2012 values for the equine and human health sectors, in order to increase preparedness and help decision-makers. Modelling of risk areas, based on the habitat suitable for Culex pipiens, the main vector of the virus, allowed us to determine equine and human populations at risk. Characteristics of the different clinical forms of the disease based on past epidemics in Europe allowed morbidity among horses and humans to be estimated. The main costs for the equine sector were vaccination and replacement value of dead or euthanised horses. The choice of the vaccination strategy would have important consequences in terms of cost. Vaccination of the country’s whole population of horses, based on a worst-case scenario, would cost more than EUR 30 million; for areas at risk, the cost would be around EUR 16–17 million. Regarding the impact on human health, short-term costs and socio-economic losses were estimated for patients who developed the neuroinvasive form of the disease, as no vaccine is available yet for humans. Hospital charges of around EUR 3,600 for a case of West Nile neuroinvasive disease and EUR 4,500 for a case of acute flaccid paralysis would be the major financial consequence of an epidemic of West Nile virus infection in humans in Belgium. PMID:27526394
Humblet, Marie-France; Vandeputte, Sébastien; Fecher-Bourgeois, Fabienne; Léonard, Philippe; Gosset, Christiane; Balenghien, Thomas; Durand, Benoît; Saegerman, Claude
2016-08-04
This study aimed at estimating, in a prospective scenario, the potential economic impact of a possible epidemic of WNV infection in Belgium, based on 2012 values for the equine and human health sectors, in order to increase preparedness and help decision-makers. Modelling of risk areas, based on the habitat suitable for Culex pipiens, the main vector of the virus, allowed us to determine equine and human populations at risk. Characteristics of the different clinical forms of the disease based on past epidemics in Europe allowed morbidity among horses and humans to be estimated. The main costs for the equine sector were vaccination and replacement value of dead or euthanised horses. The choice of the vaccination strategy would have important consequences in terms of cost. Vaccination of the country's whole population of horses, based on a worst-case scenario, would cost more than EUR 30 million; for areas at risk, the cost would be around EUR 16-17 million. Regarding the impact on human health, short-term costs and socio-economic losses were estimated for patients who developed the neuroinvasive form of the disease, as no vaccine is available yet for humans. Hospital charges of around EUR 3,600 for a case of West Nile neuroinvasive disease and EUR 4,500 for a case of acute flaccid paralysis would be the major financial consequence of an epidemic of West Nile virus infection in humans in Belgium. This article is copyright of The Authors, 2016.
1991-01-01
Experience in developing integrated optical devices, nonlinear magnetic-optic materials, high frequency modulators, computer-aided modeling and sophisticated... high-level presentation and distributed control models for integrating heterogeneous mechanical engineering applications and tools. The design is focused... statistically accurate worst case device models for circuit simulation. Present methods of worst case device design are ad hoc and do not allow the
Economic optimization of the energy transport component of a large distributed solar power plant
NASA Technical Reports Server (NTRS)
Turner, R. H.
1976-01-01
A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system consistent with minimized cost constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution and also the insulation thickness distribution associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of any diseconomy of scale.
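The minimization logic reduces, for a single pipe run, to a simple grid scan (all correlations and prices below are hypothetical placeholders):
```python
# Total annualized cost = pipe capital + insulation capital + pump-work cost
# (falls steeply with diameter) + heat-leak cost (falls with insulation).

def total_cost(d_m, t_ins_m):
    pipe = 900.0 * d_m                      # capital roughly ~ diameter
    insulation = 400.0 * t_ins_m
    pump = 0.05 / d_m**5                    # friction head ~ 1/d^5 (turbulent)
    heat_leak = 3.0 / (1.0 + 60.0 * t_ins_m)
    return pipe + insulation + pump + heat_leak

grid = [(d / 100.0, t / 100.0) for d in range(5, 41) for t in range(1, 21)]
best = min(grid, key=lambda dt: total_cost(*dt))
print(best, round(total_cost(*best), 2))    # minimum-cost (diameter, thickness)
```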
DEVELOPMENT OF A LOW-COST INFERENTIAL NATURAL GAS ENERGY FLOW RATE PROTOTYPE RETROFIT MODULE
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Kelner; T.E. Owen; D.L. George
2004-03-01
In 1998, Southwest Research Institute® began a multi-year project co-funded by the Gas Research Institute (GRI) and the U.S. Department of Energy. The project goal is to develop a working prototype instrument module for natural gas energy measurement. The module will be used to retrofit a natural gas custody transfer flow meter for energy measurement, at a cost an order of magnitude lower than a gas chromatograph. Development and evaluation of the prototype retrofit natural gas energy flow meter in 2000-2001 included: (1) evaluation of the inferential gas energy analysis algorithm using supplemental gas databases and anticipated worst-case gas mixtures; (2) identification and feasibility review of potential sensing technologies for nitrogen diluent content; (3) experimental performance evaluation of infrared absorption sensors for carbon dioxide diluent content; and (4) procurement of a custom ultrasonic transducer and redesign of the ultrasonic pulse reflection correlation sensor for precision speed-of-sound measurements. A prototype energy meter module containing improved carbon dioxide and speed-of-sound sensors was constructed and tested in the GRI Metering Research Facility at SwRI. Performance of this module using transmission-quality natural gas and gas containing supplemental carbon dioxide up to 9 mol% resulted in gas energy determinations well within the inferential algorithm worst-case tolerance of ±2.4 Btu/scf (nitrogen diluent gas measured by gas chromatograph). A two-week field test was performed at a gas-fired power plant to evaluate the inferential algorithm and the data acquisition requirements needed to adapt the prototype energy meter module to practical field site conditions.
Local measles vaccination gaps in Germany and the role of vaccination providers.
Eichner, Linda; Wjst, Stephanie; Brockmann, Stefan O; Wolfers, Kerstin; Eichner, Martin
2017-08-14
Measles elimination in Europe is an urgent public health goal, yet despite the efforts of its member states, vaccination gaps and outbreaks occur. This study explores local vaccination heterogeneity in kindergartens and municipalities of a German county. Data on children from mandatory school enrolment examinations in 2014/15 in Reutlingen county were used. Children with unknown vaccination status were either removed from the analysis (best case) or assumed to be unvaccinated (worst case). Vaccination data were translated into expected outbreak probabilities. Physicians and kindergartens with statistically outstanding numbers of under-vaccinated children were identified. A total of 170 (7.1%) of 2388 children did not provide a vaccination certificate; 88.3% (worst case) or 95.1% (best case) were vaccinated at least once against measles. Based on the worst case vaccination coverage, <10% of municipalities and <20% of kindergartens were sufficiently vaccinated to be protected against outbreaks. Excluding children without a vaccination certificate (best case) leads to over-optimistic views: the overall outbreak probability in case of a measles introduction lies between 39.5% (best case) and 73.0% (worst case). Four paediatricians were identified who accounted for 41 of 109 unvaccinated children and for 47 of 138 incomplete vaccinations; GPs showed significantly higher rates of missing vaccination certificates and unvaccinated or under-vaccinated children than paediatricians. Missing vaccination certificates pose a severe problem regarding the interpretability of vaccination data. Although the coverage for at least one measles vaccination is higher in the studied county than in most South German counties and higher than the European average, many severe and potentially dangerous vaccination gaps occur locally. If other federal German states and EU countries show similar vaccination variability, measles elimination may not succeed in Europe.
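The best-case/worst-case coverage bounds used in the study follow directly from the counts; the snippet below back-calculates them from the abstract's own figures (the vaccinated count is inferred from the reported percentages):
```python
# Children without a vaccination certificate are either excluded (best case)
# or counted as unvaccinated (worst case), as in the study above.

n_children = 2388
n_no_certificate = 170
n_vaccinated = 2109   # inferred from the abstract's percentages, not reported

best_case = n_vaccinated / (n_children - n_no_certificate)
worst_case = n_vaccinated / n_children

print(f"best case:  {best_case:.1%}")   # ~95.1%
print(f"worst case: {worst_case:.1%}")  # ~88.3%
```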
Cost-effectiveness of HPV vaccination in the prevention of cervical cancer in Malaysia.
Ezat, Wan Puteh Sharifa; Aljunid, Syed
2010-01-01
Cervical cancer (CC) has the second highest incidence among female cancers in Malaysia. The costs of chronic management have a high impact on the nation's health costs and patients' quality of life; these can be avoided by better screening and HPV vaccination. Respondents were interviewed from six public Gynecology-Oncology hospitals. Methods include experts' panel discussions to estimate treatment costs by severity and direct interviews with respondents using costing and SF-36 quality of life (QOL) questionnaires. Three options were compared, i.e. screening via Pap smear, quadrivalent HPV vaccination, and a combined strategy (screening plus vaccination). Scenario-based sensitivity analyses using screening population coverage (40-80%) and costs of vaccine (RM 300-400/dose) were calculated. 502 cervical pre-invasive and invasive cervical cancer (ICC) patients participated in the study. Mean age was 53.3 +/- 11.2 years; most were educated to secondary level (39.4%), Malay (44.2%), and married for 27.73 +/- 12.1 years. Life expectancy gained from vaccination is 13.04 years, and the average quality-adjusted life years (QALYs) saved is 24.4 in the vaccinated vs 6.29 in the unvaccinated. Cost/QALY for Pap smear at base case is RM 1,214.96 and RM 1,100.01 at increased screening coverage; for HPV vaccination the base case is RM 35,346.79 and RM 46,530.08 when the vaccination price is higher. In the combined strategy, the base case is RM 11,289.58; RM 7,712.74 at best case and RM 14,590.37 at worst case. The incremental cost-effectiveness ratio (ICER) showed that screening at 70% coverage or higher is highly cost-effective at RM 946.74 per QALY saved, followed by the combined strategy at RM 35,346.67 per QALY saved. Vaccination increases life expectancy with better QOL of women when cancer can be avoided. Cost-effective strategies would include increasing Pap smear coverage to 70% or higher. Since feasibility and long-term screening adherence are doubtful among Malaysian women, vaccination of young women is a more cost-effective strategy against cervical cancer.
Worst case estimation of homology design by convex analysis
NASA Technical Reports Server (NTRS)
Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.
1998-01-01
The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. The formulation to evaluate the worst case for homology design caused by uncertain fluctuation of loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
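For a linearized error index the Lagrange-multiplier search described above has a closed form, which a short sketch can verify numerically (notation assumed, not the paper's):
```python
import numpy as np

# Worst case of a linearized error index e(u) = g^T u over a hyperellipsoidal
# convex hull u^T W u <= 1. The Lagrange condition g = 2*lambda*W u puts the
# maximizer on the boundary: u* = W^{-1} g / sqrt(g^T W^{-1} g).

rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n))
W = A @ A.T + n * np.eye(n)     # positive-definite ellipsoid matrix
g = rng.random(n)               # sensitivity of the error index to nodal loads

Winv_g = np.linalg.solve(W, g)
u_star = Winv_g / np.sqrt(g @ Winv_g)

print("worst-case load direction:", u_star)
print("on boundary (u^T W u):", u_star @ W @ u_star)   # -> 1.0
print("worst-case error index:", g @ u_star)           # = sqrt(g^T W^-1 g)
```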
Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios
NASA Astrophysics Data System (ADS)
Kluess, D.; Mittelmeier, W.; Bader, R.
2009-12-01
In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst case scenarios including malposition, bone defects and stumbling. A finite-element model was developed to calculate the implant performance in situ. The worst case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stress values were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.
Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios
NASA Astrophysics Data System (ADS)
Kluess, D.; Mittelmeier, W.; Bader, R.
2010-03-01
In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst case scenarios including malposition, bone defects and stumbling. A finite-element model was developed to calculate the implant performance in situ. The worst case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stress values were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.
Dima, Giovanna; Verzera, Antonella; Grob, Koni
2011-11-01
Party plates made of recycled paperboard with a polyolefin film on the food contact surface (more often polypropylene than polyethylene) were tested for migration of mineral oil into various foods, applying reasonable worst-case conditions. The worst case was identified as a slice of fried meat placed onto the plate while hot and allowed to cool for 1 h. As it caused the acceptable daily intake (ADI) specified by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) to be exceeded, it is concluded that recycled paperboard is generally acceptable for party plates only when separated from the food by a functional barrier. Migration data obtained with oil as a simulant at 70°C were compared to the migration into foods. A contact time of 30 min was found to reasonably cover the worst case determined in food.
Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J
2013-04-16
In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from the worst to the best case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
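One natural way to pose the vehicle-to-route assignment is as a linear assignment problem (a sketch, not the authors' model; the impact matrix and its numbers are hypothetical):
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# impact[i, j]: monetized health + climate + operating cost of running bus i
# on route j, built from each bus's emission level and each route's exposure
# potential.

emission = np.array([1.0, 3.0, 5.0])        # g/km per bus technology
exposure = np.array([9.0, 2.0, 4.0])        # impact weight per route
impact = np.outer(emission, exposure)       # cost matrix (bus x route)

rows, cols = linear_sum_assignment(impact)  # minimize total impact
print(list(zip(rows, cols)), impact[rows, cols].sum())
# The cleanest bus lands on the highest-exposure route; worst-to-best
# assignments can differ by a large factor, as the case study found.
```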
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
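The quadratic voltage-power relation noted at the start of the abstract is why stretching a task to its deadline at a lower voltage saves energy; a toy calculation (hypothetical constants, assuming clock frequency roughly linear in voltage):
```python
# Energy per task scales roughly as C * V^2 * cycles, so running just fast
# enough to meet the deadline at a lower voltage cuts energy quadratically.

def energy(v, cycles, c_eff=1e-9):
    return c_eff * v**2 * cycles

cycles, deadline_s = 2e8, 2.0
f_max, v_max = 2e8, 1.8              # Hz at volts (assumed linear f ~ V)

f_needed = cycles / deadline_s       # just-in-time frequency
v_needed = v_max * f_needed / f_max  # scale voltage with frequency

print("full speed:", energy(v_max, cycles), "J")
print("scaled    :", energy(v_needed, cycles), "J")  # 4x less at half speed
```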
Davis, Michael J; Janke, Robert
2018-01-04
The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
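A minimal sketch of the design step (synthetic impact data, not the study's network models): given a matrix of consequences when a contamination event is first seen by a sensor at a candidate node, a greedy search places sensors to minimize either the mean-case or the worst-case consequence.

```python
# Greedy sensor placement sketch for a contamination warning system.
# impact[e, n] = people exposed before event e is detected by a sensor at
# node n (a large value if that sensor never sees the contaminant). With a
# sensor set S, the consequence of event e is min over n in S of impact[e, n].
import numpy as np

rng = np.random.default_rng(0)
impact = rng.uniform(100, 10_000, size=(50, 20))  # 50 events x 20 nodes (synthetic)

def place(impact, n_sensors, objective=np.mean):
    chosen = []
    best = np.full(impact.shape[0], np.inf)       # best detection per event so far
    for _ in range(n_sensors):
        scores = [objective(np.minimum(best, impact[:, n]))
                  for n in range(impact.shape[1])]
        pick = int(np.argmin(scores))
        chosen.append(pick)
        best = np.minimum(best, impact[:, pick])
    return chosen, objective(best)

print(place(impact, 5, np.mean))   # mean-case design
print(place(impact, 5, np.max))    # worst-case design
```

Running both objectives on the same data shows why the two designs can differ, and why a mean-case design may still perform reasonably against worst-case events under model uncertainty.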
Nursing home case-mix reimbursement in Mississippi and South Dakota.
Arling, Greg; Daneman, Barry
2002-04-01
To evaluate the effects of nursing home case-mix reimbursement on facility case mix and costs in Mississippi and South Dakota. Secondary data from resident assessments and Medicaid cost reports from 154 Mississippi and 107 South Dakota nursing facilities in 1992 and 1994, before and after implementation of new case-mix reimbursement systems. The study relied on a two-wave panel design to examine case mix (resident acuity) and direct care costs in 1-year periods before and after implementation of a nursing home case-mix reimbursement system. Cross-lagged regression models were used to assess change in case mix and costs between periods while taking into account facility characteristics. Facility-level measures were constructed from Medicaid cost reports and Minimum Data Set-Plus assessment records supplied by each state. Resident case mix was based on the RUG-III classification system. Facility case-mix scores and direct care costs increased significantly between periods in both states. Changes in facility costs and case mix were significantly related in a positive direction. Medicare utilization and the rate of hospitalizations from the nursing facility also increased significantly between periods, particularly in Mississippi. The case-mix reimbursement systems appeared to achieve their intended goals: improved access for heavy-care residents and increased direct care expenditures in facilities with higher acuity residents. However, increases in Medicare utilization may have influenced facility case mix or costs, and some facilities may have been unprepared to care for higher acuity residents, as indicated by increased rates of hospitalization.
Richter, Ann-Kathrin; Klimek, Ludger; Merk, Hans F; Mülleneisen, Norbert; Renz, Harald; Wehrmann, Wolfgang; Werfel, Thomas; Hamelmann, Eckard; Siebert, Uwe; Sroczynski, Gaby; Wasem, Jürgen; Biermann-Stallwitz, Janine
2018-03-24
Specific immunotherapy is the only causal treatment in respiratory allergy. Due to high treatment costs and possible severe side effects, subcutaneous immunotherapy (SCIT) is not indicated in all patients. Nevertheless, reported treatment rates seem to be low. This study aims to analyze the effects of increasing treatment rates of SCIT in respiratory allergy in terms of costs and quality-adjusted life years (QALYs). A state-transition Markov model simulates the course of disease of patients with allergic rhinitis, allergic asthma and both diseases over 10 years, including a symptom-free state and death. Treatment comprises symptomatic pharmacotherapy alone or combined with SCIT. The model compares two strategies of increased and status quo treatment rates. Transition probabilities are based on routine data. Costs are calculated from the societal perspective, applying German unit costs to literature-derived resource consumption. QALYs are determined by translating the mean change in non-preference-based quality of life scores to a change in utility. Key parameters are subjected to deterministic sensitivity analyses. Increasing treatment rates is a cost-effective strategy with an incremental cost-effectiveness ratio (ICER) of 3,484€/QALY compared to the status quo. The most influential parameters are SCIT discontinuation rates, treatment effects on the transition probabilities and the cost of SCIT. Across all parameter variations, the best case leads to dominance of increased treatment rates, while the worst case ICER is 34,315€/QALY. Excluding indirect cost leads to a twofold increase in the ICER. Measures to increase SCIT initiation rates should be implemented and should also address improving adherence.
The Pension Pac-Man: How Pension Debt Eats Away at Teacher Salaries
ERIC Educational Resources Information Center
Aldeman, Chad
2016-01-01
Why aren't teacher salaries rising? This puzzle can be explained by three trends eating into teachers' take-home pay: rising health care costs, declining student/teacher ratios, and rising retirement costs. Retirement costs are the most hidden of these three factors. The result is that most teachers are getting the worst of both worlds. Teachers…
Sensitivity of worst-case storm surge considering influence of climate change
NASA Astrophysics Data System (ADS)
Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya
2016-04-01
There are two standpoints when assessing the risks posed by climate change. The first is disaster prevention, which calls for probabilistic information on meteorological elements obtained from a sufficiently large number of ensemble simulations. The second is disaster mitigation, which calls for a very high resolution, sophisticated model that can represent a worst-case event in detail. With unlimited computing resources, many ensemble runs with a very high resolution model could address both themes at once; in practice, resources are limited, and the experiment design must trade off resolution against the number of simulations. Applying the PGWD (Pseudo Global Warming Downscaling) method is one solution for analyzing a worst-case event in detail. Here we introduce an example of estimating the influence of climate change on the worst-case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1 km grid WRF model represented both the intensity and the structure of the super typhoon. By adopting the PGWD method, we can estimate only the influence of climate change on the development process of the typhoon; changes in genesis cannot be estimated. Finally, we ran the SU-WAT model (which includes a shallow-water equation model) to obtain the storm surge height signal. The result indicates that the height of the storm surge increased by up to 20% owing to 150 years of climate change.
Nothing's Free: Calculating the Cost of Volunteers
ERIC Educational Resources Information Center
Ingle, W. Kyle
2010-01-01
Most school district administrators recognize the benefits of using parent and community volunteers, including improved school-community relations. But volunteers are not cost free. At their best, volunteers can be a valuable resource for schools and districts. At their worst, volunteers can consume already limited resources. However, their use…
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.
Electrical Evaluation of RCA MWS5501D Random Access Memory, Volume 2, Appendix A
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.
Lazy checkpoint coordination for bounding rollback propagation
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Fuchs, W. Kent
1992-01-01
Independent checkpointing allows maximum process autonomy but suffers from potential domino effects. Coordinated checkpointing eliminates the domino effect by sacrificing a certain degree of process autonomy. In this paper, we propose the technique of lazy checkpoint coordination which preserves process autonomy while employing communication-induced checkpoint coordination for bounding rollback propagation. The introduction of the notion of laziness allows a flexible trade-off between the cost for checkpoint coordination and the average rollback distance. Worst-case overhead analysis provides a means for estimating the extra checkpoint overhead. Communication trace-driven simulation for several parallel programs is used to evaluate the benefits of the proposed scheme for real applications.
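The notion of laziness can be sketched as follows; this is a simplified reading of the scheme, not the paper's protocol: with laziness Z, coordination is enforced only at checkpoints whose index is a multiple of Z, via a forced checkpoint when an incoming message reveals that the sender has crossed such a boundary.

```python
# Simplified sketch of communication-induced, lazy checkpoint coordination.
# Each process piggybacks its checkpoint index on messages; a receiver takes
# a forced checkpoint when the sender has crossed a coordination boundary
# (a multiple of the laziness Z) that the receiver has not reached yet.
Z = 3  # laziness: coordinate only every Z-th checkpoint index

class Process:
    def __init__(self, name):
        self.name, self.ckpt = name, 0

    def local_checkpoint(self):
        self.ckpt += 1                       # independent (autonomous) checkpoint

    def send(self):
        return self.ckpt                     # piggybacked checkpoint index

    def receive(self, sender_ckpt):
        # force coordination only at lazy boundaries, bounding rollback distance
        boundary = (sender_ckpt // Z) * Z
        if self.ckpt < boundary:
            self.ckpt = boundary
            print(f"{self.name}: forced checkpoint at index {boundary}")

p, q = Process("P"), Process("Q")
for _ in range(7):
    p.local_checkpoint()
q.receive(p.send())                          # Q is dragged to the last boundary (6)
```

Larger Z means fewer forced checkpoints but a longer potential rollback, which is exactly the trade-off the paper quantifies.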
Vapor hydrogen peroxide as alternative to dry heat microbial reduction
NASA Astrophysics Data System (ADS)
Chung, S.; Kern, R.; Koukol, R.; Barengoltz, J.; Cash, H.
2008-09-01
The Jet Propulsion Laboratory (JPL), in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA approved sterilization technique for spacecraft subsystems and systems. The goal was to include this technique, with an appropriate specification, in NASA Procedural Requirements 8020.12 as a low-temperature complementary technique to the dry heat sterilization process. The VHP process is widely used by the medical industry to sterilize surgical instruments and biomedical devices, but high doses of VHP may degrade the performance of flight hardware or compromise material compatibility. The goal for this study was to determine the minimum VHP process conditions that achieve microbial reduction levels acceptable for planetary protection. Experiments were conducted by the STERIS Corporation, under contract to JPL, to evaluate the effectiveness of vapor hydrogen peroxide for the inactivation of the standard spore challenge, Geobacillus stearothermophilus. VHP process parameters were determined that provide significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. In addition to the obvious process parameters of interest (hydrogen peroxide concentration, number of injection cycles, and exposure duration), the investigation also considered the possible effect on lethality of environmental parameters: temperature, absolute humidity, and material substrate. This study delineated a range of test sterilizer process conditions: VHP concentration, process duration, process temperature and humidity ranges for which the worst-case D-value may be imposed, and the dependence on selected spacecraft material substrates. The derivation of D-values from the lethality data permitted conservative planetary protection recommendations.
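For context, a D-value is the exposure time needed for a one-log (90%) reduction in viable spores. A minimal calculation from survivor counts, assuming log-linear inactivation and using illustrative numbers only:

```python
import math

def d_value(exposure_time_min, n0, nt):
    """Time for a 1-log reduction, assuming log-linear spore inactivation."""
    return exposure_time_min / (math.log10(n0) - math.log10(nt))

# Illustrative: 1e6 spores reduced to 1e2 after 20 min of VHP exposure
print(d_value(20.0, 1e6, 1e2))   # -> 5.0 min per log reduction
```

A "worst case D-value" in the abstract's sense is then the largest such value observed across the stated temperature and humidity ranges.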
Probabilistic Models for Solar Particle Events
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.
2009-01-01
Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.
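A minimal sketch of the final step, using synthetic fluences rather than the survey data, and ignoring the event-frequency component of the full model: once each event's spectrum has a fitted representation, a worst-case fluence at a given confidence level can be read off as a percentile across events at each energy.

```python
# Sketch: derive a worst-case elemental fluence spectrum as a function of a
# user confidence level from a set of per-event fitted spectra (synthetic).
import numpy as np

rng = np.random.default_rng(1)
energies = np.logspace(0, 2, 20)                 # MeV/nuc grid (illustrative)
n_events = 40
base = 1e6 * energies**-2.0                      # power-law baseline
events = base * rng.lognormal(0.0, 1.0, size=(n_events, 1))  # event spread

def worst_case(events, confidence):
    # fluence not exceeded in `confidence` fraction of events, per energy bin
    return np.percentile(events, 100 * confidence, axis=0)

spec90 = worst_case(events, 0.90)
spec99 = worst_case(events, 0.99)
print(spec90[:3], spec99[:3])
```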
A Multidimensional Assessment of Children in Conflictual Contexts: The Case of Kenya
ERIC Educational Resources Information Center
Okech, Jane E. Atieno
2012-01-01
Children in Kenya's Kisumu District Primary Schools (N = 430) completed three measures of trauma. Respondents completed the "My Worst Experience Scale" (MWES; Hyman and Snook 2002) and its supplement, the "School Alienation and Trauma Survey" (SATS; Hyman and Snook 2002), sharing their worst experiences overall and specifically…
Analysis of on-orbit thermal characteristics of the 15-meter hoop/column antenna
NASA Technical Reports Server (NTRS)
Andersen, Gregory C.; Farmer, Jeffery T.; Garrison, James
1987-01-01
In recent years, interest in large deployable space antennas has led to the development of the 15-meter hoop/column antenna. The thermal environment the antenna is expected to experience during orbit is examined, and the temperature distributions leading to reflector surface distortion errors are determined. Two flight orientations are examined, corresponding to (1) normal operation and (2) use in a Shuttle-attached flight experiment. A reduced element model was used to determine element temperatures at 16 orbit points for both flight orientations. The temperature ranged from a minimum of 188 K to a maximum of 326 K. Based on the element temperatures, orbit positions leading to possible worst-case surface distortions were determined, and the corresponding temperatures were used in a static finite element analysis to quantify surface control cord deflections. The predicted changes in the control cord lengths were in the submillimeter range.
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
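The search over test combinations can be organized as below. The per-test costs and required sample sizes here are hypothetical placeholders; in the study, the sample sizes come from the Bayesian criteria named above (average coverage, average length, and modified worst outcome), which are not reproduced here.

```python
# Enumerate combinations of imperfect tests and compare total testing cost.
# required_n would come from a Bayesian sample-size criterion; here it is a
# hypothetical lookup keyed by the test combination.
unit_cost = {"CXR": 5.0, "TST": 2.0, "culture": 20.0}    # $ per subject (illustrative)
required_n = {("CXR",): 2400, ("TST",): 2600, ("culture",): 900,
              ("CXR", "TST"): 1500, ("CXR", "culture"): 700,
              ("TST", "culture"): 750, ("CXR", "TST", "culture"): 520}

def total_cost(combo):
    return required_n[combo] * sum(unit_cost[t] for t in combo)

for combo in sorted(required_n):
    print(combo, required_n[combo], total_cost(combo))
print("cheapest design:", min(required_n, key=total_cost))
```

The illustrative numbers reproduce the abstract's observation: adding a test sometimes lowers total cost (the smaller sample offsets the extra per-subject cost) and sometimes raises it.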
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for dealing with problems of estimating spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes result in very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
Comparative cost-effectiveness of HPV vaccines in the prevention of cervical cancer in Malaysia.
Ezat, Sharifa W P; Aljunid, Syed
2010-01-01
Cervical cancer (CC) had the second highest incidence of female cancers in Malaysia in 2003-2006. Prevention is possible by both Pap smear screening and HPV vaccination with either the bivalent vaccine (BV) or the quadrivalent vaccine (QV). In the present study, cost-effectiveness options were compared for three programs, i.e., screening via Pap smear, modelled HPV vaccination (QV and BV), and a combined strategy (screening plus vaccination). A scenario-based sensitivity analysis was conducted using screening population coverages (40-80%) and vaccine costs (RM 100-200/dose). This was an economic burden, cross-sectional study conducted in 2006-2009, with respondents interviewed at six public gynecology-oncology hospitals. Methods included expert panel discussions to estimate treatment costs of CC, genital warts and vulva/vagina cancers by severity, and direct interviews with respondents using costing and SF-36 quality of life questionnaires. A total of 502 cervical cancer patients participated, with a mean age of 53.3±11.2 years and a mean marriage length of 27.7±12.1 years; Malays accounted for 44.2%. Cost/quality-adjusted life year (QALY) for Pap smear was RM 1,215 in the base case and RM 1,100 at increased screening coverage. With QV only, it was RM 15,662 in the base case and RM 24,203 when the vaccination price was increased. With BV only, the respective figures were RM 1,359,057 and RM 2,530,018. For the QV combined strategy, cost/QALY was RM 4,937 in the base case, falling to RM 3,395 in the best case and rising to RM 7,992 in the worst case. With the BV combined strategy, these three cost/QALYs were RM 6,624, RM 4,033 and RM 10,543. Incremental cost-effectiveness ratio (ICER) analysis showed that screening at 70% coverage or higher was highly cost effective at RM 946.74 per QALY saved, but this was preceded by the best-case combined strategy with QV at RM 515.29 per QALY saved. QV is more cost effective than BV. The QV combined strategy had higher cost-effectiveness than any other method, including Pap smear screening at high population coverage.
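The ICER arithmetic used throughout this comparison is simply incremental cost over incremental effect. A one-line version of the formula, with magnitudes chosen for illustration only, not recomputed from the study:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative magnitudes only (RM): a combined strategy versus screening alone
print(icer(cost_new=5_000_000, cost_old=2_000_000,
           qaly_new=1_600, qaly_old=1_000))   # -> RM 5,000 per QALY gained
```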
Development of a nurse case management service: a proposed business plan for rural hospitals.
Adams, Marsha Howell; Crow, Carolyn S
2005-01-01
The nurse case management service (NCMS) for rural hospitals is an entrepreneurial endeavor designed to provide rural patients with quality, cost-effective healthcare. This article describes the development of an NCMS. A detailed marketing and financial plan, a review of industry trends, and the legal structure and risks associated with the development of the venture are presented. The financial plan projects a minimum savings of 223,200 dollars for rural institutions annually. To improve quality and reduce cost for rural hospitals, the authors recommend implementation of an NCMS.
Field, Nigel; Amirthalingam, Gayatri; Waight, Pauline; Andrews, Nick; Ladhani, Shamez N.; van Hoek, Albert Jan; Maple, Peter A.C.; Brown, Kevin E.; Miller, Elizabeth
2014-01-01
Introduction: In the UK, primary varicella is usually a mild infection in children, but can cause serious illness in susceptible pregnant women and adults. The UK Joint Committee on Vaccination and Immunisation is considering an adolescent varicella vaccination programme. Cost-effectiveness depends upon identifying susceptibles and minimising vaccine wastage, and chickenpox history is one method to screen for eligibility. To inform this approach, we estimated the proportion of adolescents with varicella antibodies by reported chickenpox history. Methods: Recruitment occurred through secondary schools in England from February to September 2012. Parents were asked about their child's history of chickenpox, explicitly setting the context in terms of the implications for vaccination. 247 adolescents, whose parents reported positive (120), negative (77) or uncertain (50) chickenpox history provided oral fluid for varicella zoster virus-specific immunoglobulin-G (VZV-IgG) testing. Results: 109 (90.8% [85.6–96.0%]) adolescents with a positive chickenpox history, 52 (67.5% [57.0–78.1%]) with a negative history and 42 (84.0% [73.7–94.3%]) with an uncertain history had VZV-IgG suggesting prior infection. Combining negative and uncertain histories, 74% had VZV-IgG (best-case). When discounting low total-IgG samples and counting equivocals as positive (worst-case), 84% had VZV-IgG. We also modelled outcomes by varying the negative predictive value (NPV) for the antibody assay, and found 74–87% under the best-case and 84–92% under the worst-case scenario would receive vaccine unnecessarily as NPV falls to 50%. Conclusion: Reported chickenpox history discriminates between varicella immunity and susceptibility in adolescents, but significant vaccine wastage would occur if this approach alone were used to determine vaccine eligibility. A small but important proportion of those with positive chickenpox history would remain susceptible. These data are needed to determine whether reported history, with or without oral fluid testing in those with negative and uncertain history, is sufficiently discriminatory to underpin a cost-effective adolescent varicella vaccination programme. PMID:23871823
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at low cost has led many industrial nations to treat tolerances as a key factor in reducing cost while remaining competitive. In practice, tolerance allocation is still applied mostly to mechanical systems. In the electronic domain, tolerances are typically studied with Monte Carlo methods, but these are computationally expensive. This paper reviews several methods (worst-case analysis, statistical methods, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system, and explains their advantages and limitations. It then proposes an efficient method based on a neural network with Monte Carlo results as its basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example, and can easily be extended to a complex system of n components.
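A minimal Monte Carlo tolerance analysis of the kind used here as basis data; the circuit is a generic inverting op-amp stage with hypothetical tolerances, not the paper's amplifier.

```python
# Monte Carlo tolerance analysis sketch: sample component values within
# their tolerances, evaluate a circuit response, and estimate yield.
# Response: gain of an inverting op-amp stage, G = -R2/R1.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
r1 = 10e3 * (1 + rng.uniform(-0.05, 0.05, N))   # 10 kohm, 5% tolerance
r2 = 100e3 * (1 + rng.uniform(-0.02, 0.02, N))  # 100 kohm, 2% tolerance

gain = -r2 / r1
spec_ok = (np.abs(gain) > 9.5) & (np.abs(gain) < 10.5)   # +/-5% gain spec
print("mean gain:", gain.mean(), "yield:", spec_ok.mean())
```

Tightening a tolerance raises yield but also part cost; a cost model plus an optimizer, or the paper's trained network standing in for the expensive Monte Carlo loop, searches this trade-off.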
Mühlbacher, Axel C; Kaczynski, Anika; Zweifel, Peter; Johnson, F Reed
2016-12-01
Best-worst scaling (BWS), also known as maximum-difference scaling, is a multiattribute approach to measuring preferences. BWS aims at the analysis of preferences regarding a set of attributes, their levels or alternatives. It is a stated-preference method based on the assumption that respondents are capable of making judgments regarding the best and the worst (or the most and least important, respectively) out of three or more elements of a choice-set. As is true of discrete choice experiments (DCE) generally, BWS avoids the known weaknesses of rating and ranking scales while holding the promise of generating additional information by making respondents choose twice, namely the best as well as the worst criteria. A systematic literature review found 53 BWS applications in health and healthcare. This article describes possible applications, the underlying theoretical concepts and the implementation of BWS in its three variants: 'object case', 'profile case' and 'multiprofile case'. It surveys BWS methods with a focus on study design, experimental design, and data analysis. Moreover, the article discusses the strengths and weaknesses of the three types of BWS distinguished and offers an outlook. A companion paper focuses on special issues of theory and statistical inference confronting BWS in preference measurement.
ENVIRONMENTAL ECONOMICS FOR WATERSHED RESTORATION
This book overviews non-market valuation, input-output analysis, cost-benefit analysis, and presents case studies from the Mid Atlantic Highland region, with all but the bare minimum econometrics, statistics, and math excluded or relegated to an appendix. It is a non-market valu...
Effect of Impact Location on the Response of Shuttle Wing Leading Edge Panel 9
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Spellman, Regina L.; Hardy, Robin C.; Fasanella, Edwin L.; Jackson, Karen E.
2005-01-01
The objective of this paper is to compare the results of several simulations performed to determine the worst-case location for a foam impact on the Space Shuttle wing leading edge. The simulations were performed using the commercial non-linear transient dynamic finite element code, LS-DYNA. These simulations represent the first in a series of parametric studies performed to support the selection of the worst-case impact scenario. Panel 9 was selected for this study to enable comparisons with previous simulations performed during the Columbia Accident Investigation. The projectile for this study is a 5.5-in cube of typical external tank foam weighing 0.23 lb. Seven locations spanning the panel surface were impacted with the foam cube. For each of these cases, the foam was traveling at 1000 ft/s directly aft, along the orbiter X-axis. Results compared across the parametric studies included strains, contact forces, and material energies. The results show that the worst-case impact location was on the top surface, near the apex.
38th Annual Maintenance & Operations Cost Study for Schools
ERIC Educational Resources Information Center
Agron, Joe
2009-01-01
Despite the worst economic environment in generations, spending by K-12 institutions on maintenance and operations (M&O) held its own--defying historical trends that have shown M&O spending among the most affected in times of budget tightening. This article presents data from the 38th annual Maintenance & Operations Cost Study for…
Faith, Daniel P.
2015-01-01
The phylogenetic diversity measure ('PD') quantifies the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
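Expected PD has a direct computational reading: each branch contributes its length times the probability that at least one descendant taxon survives. A toy calculation, assuming independent extinctions and a hypothetical tree:

```python
# Expected phylogenetic diversity (PD) sketch: sum over branches of
# branch length x P(at least one descendant taxon survives).
from math import prod

# toy tree: branch -> (length, taxa below the branch); hypothetical values
branches = {
    "root-A":   (2.0, {"sp1", "sp2"}),
    "A-sp1":    (1.0, {"sp1"}),
    "A-sp2":    (1.5, {"sp2"}),
    "root-sp3": (3.0, {"sp3"}),
}
p_extinct = {"sp1": 0.2, "sp2": 0.9, "sp3": 0.5}

expected_pd = sum(
    length * (1 - prod(p_extinct[t] for t in taxa))  # branch survives if any taxon does
    for length, taxa in branches.values()
)
print(expected_pd)
```

The worst-case concern in the abstract is visible even here: a long branch whose few descendants all have high extinction probability contributes little to expected PD, yet losing it would be a large one-off loss.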
NASA Technical Reports Server (NTRS)
Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.
1999-01-01
The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.
Optimization of fixed-range trajectories for supersonic transport aircraft
NASA Astrophysics Data System (ADS)
Windhorst, Robert Dennis
1999-11-01
This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to time-scale de-couple the equations of motion into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point-boundary-value-problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time-scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for a HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three local optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.
The ethics of administrative credentialing.
Jones, James W; McCullough, Laurence B; Crigger, Nancy A; Richman, Bruce W
2005-04-01
A vascular surgeon has practiced in the same community for more than 20 years, holding privileges at the two largest local general hospitals. She is widely respected and admired by patients and fellow physicians in all specialties, and her results are consistently good. Recently, the board of directors at the hospital that has been the source of 80% of her case referrals hired a notorious slash-and-burn management firm to improve the balance sheet. The new chief executive officer (CEO) installed an information technology system that can provide management with physician-specific figures on costs and reimbursements. The management consultants identified the 10% of physicians with the worst cost/reimbursement ratios over the preceding 5 years and persuaded the board of directors to order their clinical privileges withdrawn. Our seasoned surgeon learns that she is among the targeted group. Is there an ethical issue here, and, if so, how should she respond?
Maximum Power Point tracking charge controllers for telecom applications -- Analysis and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wills, R.H.
Simple charge controllers connect photovoltaic modules directly to the battery bank, resulting in a significant power loss if the battery bank voltage differs greatly from the PV Maximum Power Point (MPP) voltage. Recent modeling work at AES has shown that dc-dc converter type MPP tracking charge controllers can deliver more than 30% more energy from PV modules to the battery when the PV modules are cool and the battery state of charge is low--this is typically both the worst-case condition (i.e., winter) and also the design condition that determines the PV array size. Economic modeling, based on typical telecom system installed costs, shows benefits of more than $3/Wp for MPPT over conventional charge controllers in this application--a value that greatly exceeds the additional cost of the dc-dc converter.
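The gain from MPP tracking over a direct battery connection can be seen from a sampled I-V curve. The curve below is a crude synthetic shape, not AES's model:

```python
# Sketch: compare power harvested at a fixed battery voltage (direct
# connection) with the true maximum power point of a PV I-V curve.
import numpy as np

v = np.linspace(0, 21, 500)                  # module voltage sweep (V)
i = 5.0 * (1 - np.exp((v - 21.0) / 1.8))     # crude single-diode-like I-V shape
p = v * i

v_batt = 12.0                                # low battery clamps the module here
p_direct = p[np.argmin(np.abs(v - v_batt))]  # power at the battery voltage
p_mpp = p.max()                              # power at the maximum power point

print(f"direct: {p_direct:.1f} W, MPP: {p_mpp:.1f} W, "
      f"gain: {100 * (p_mpp / p_direct - 1):.0f}%")
```

With these illustrative parameters the MPP sits near 17 V, so a 12 V battery clamp forfeits roughly a quarter of the available power, consistent in spirit with the 30% figure cited above for cool modules and a discharged battery.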
Alfa, M J; Olson, N
2016-05-01
To determine which simulated-use test soils met the worst-case organic levels and viscosity of clinical secretions, and had the best adhesive characteristics. Levels of protein, carbohydrate and haemoglobin, and vibrational viscosity of clinical endoscope secretions were compared with test soils including ATS, ATS2015, Edinburgh, Edinburgh-M (modified), Miles, 10% serum and coagulated whole blood. ASTM D3359 was used for adhesion testing. Cleaning of a single-channel flexible intubation endoscope was tested after simulated use. The worst-case levels of protein, carbohydrate and haemoglobin, and viscosity of clinical material were 219,828μg/mL, 9296μg/mL, 9562μg/mL and 6cP, respectively. Whole blood, ATS2015 and Edinburgh-M were pipettable with viscosities of 3.4cP, 9.0cP and 11.9cP, respectively. ATS2015 and Edinburgh-M best matched the worst-case clinical parameters, but ATS had the best adhesion with 7% removal (36.7% for Edinburgh-M). Edinburgh-M and ATS2015 showed similar soiling and removal characteristics from the surface and lumen of a flexible intubation endoscope. Of the test soils evaluated, ATS2015 and Edinburgh-M were found to be good choices for the simulated use of endoscopes, as their composition and viscosity most closely matched worst-case clinical material.
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.1 to 3.1 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour at the left abutment ranged from 10.4 to 12.5 ft with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 25.3 to 27.3 ft with the worst-case occurring at the incipient-overtopping discharge. The worst-case total scour also occurred at the incipient-overtopping discharge. The incipient-overtopping discharge was in between the 100- and 500-year discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Social cost of leptospirosis cases attributed to the 2011 disaster striking Nova Friburgo, Brazil.
Pereira, Carlos; Barata, Martha; Trigo, Aline
2014-04-15
The aim of this study was to estimate the social cost of the leptospirosis cases that were attributed to the natural disaster of January 2011 in Nova Friburgo (State of Rio de Janeiro, Brazil) through a partial economic assessment. This study utilized secondary data supplied by the Municipal Health Foundation of Nova Friburgo. Income scenarios based on the national and state minimum wages and on average income of the local population were employed. The total social cost of leptospirosis cases attributed to the 2011 disaster may range between US$21,500 and US$66,000 for the lower income scenario and between US$23,900 and US$100,800 for that of higher income. Empirical therapy represented a total avoided cost of US$14,800, in addition to a reduction in lethality. An estimated 31 deaths were avoided among confirmed cases of the disease, and no deaths resulted from the leptospirosis cases attributed to the natural disaster. There has been a significant post-disaster rise in leptospirosis incidence in the municipality, which illustrates the potential for increased cases--and hence costs--of this illness following natural disasters, which justifies the adoption of preventive measures in environmental health.
A holistic framework for design of cost-effective minimum water utilization network.
Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N
2008-07-01
Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.
Hollis, Geoff
2018-04-01
Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
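One family of efficient scoring algorithms treats each best-worst trial as a set of implied pairwise contests and updates item scores incrementally, Elo-style. The sketch below follows that idea; it is not necessarily the paper's exact update rule:

```python
# Elo-style scoring sketch for many-item best-worst data: the chosen best
# item implicitly beats every other item in the trial, and every other item
# beats the chosen worst. Scores converge over many trials, in time linear
# in the number of judgments rather than the number of items squared.
from collections import defaultdict

K = 30.0  # learning rate

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(scores, winner, loser):
    e = expected(scores[winner], scores[loser])
    scores[winner] += K * (1 - e)
    scores[loser] -= K * (1 - e)

def score_trials(trials):
    """trials: iterable of (items_in_set, best_item, worst_item)."""
    scores = defaultdict(lambda: 1000.0)
    for items, best, worst in trials:
        for other in items:
            if other != best:
                update(scores, best, other)
            if other not in (best, worst):
                update(scores, other, worst)
    return dict(scores)

trials = [(("dog", "cat", "rock", "war"), "dog", "war"),
          (("cat", "rock", "war", "tree"), "cat", "war")]
print(score_trials(trials))
```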
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay of data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been carried out under different types of traffic sources.
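Without the paper's exact formulation, the two building blocks can be composed as follows on a toy topology with illustrative delays: a minimum spanning tree fixes the backbone, then a minimum-cost flow routes traffic over it.

```python
# Sketch of an MST + minimum-cost-flow routing step on a toy hybrid
# access-network graph (topology and weights are illustrative).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([  # (u, v, delay-like weight)
    ("OLT", "A", 2), ("OLT", "B", 4), ("A", "B", 1),
    ("A", "C", 5), ("B", "C", 2), ("C", "D", 3), ("B", "D", 6),
])
T = nx.minimum_spanning_tree(G)              # backbone minimizing total weight

# Route demand from OLT to D over the backbone as a min-cost flow problem.
F = nx.DiGraph()
for u, v, d in T.edges(data=True):
    F.add_edge(u, v, weight=d["weight"], capacity=10)
    F.add_edge(v, u, weight=d["weight"], capacity=10)
F.nodes["OLT"]["demand"] = -3                # source supplies 3 units
F.nodes["D"]["demand"] = 3                   # sink consumes 3 units

flow = nx.min_cost_flow(F)
print(sorted(T.edges()), flow)
```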
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, a minimum-cost formulation was not feasible with local ingredients alone. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
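A minimal version of the extended formulation; the ingredient data below are hypothetical placeholders, whereas the study used local prices, food-composition tables, and an empirically derived starch-swelling constraint.

```python
# Least-cost porridge formulation sketch: minimize ingredient cost subject
# to nutrient floors and a consistency (starch) ceiling, as a linear program.
from scipy.optimize import linprog

# per 100 g of: maize flour, bean flour, oil, sugar (all values illustrative)
cost    = [0.05, 0.12, 0.20, 0.08]      # $ per 100 g
energy  = [360, 340, 880, 400]          # kcal per 100 g
protein = [9.0, 22.0, 0.0, 0.0]         # g per 100 g
starch  = [72.0, 45.0, 0.0, 0.0]        # g per 100 g, drives viscosity

# linprog minimizes c @ x with A_ub @ x <= b_ub; flip signs to express floors
A_ub = [[-e for e in energy],           # total energy >= 450 kcal
        [-p for p in protein],          # total protein >= 12 g
        starch]                         # total starch <= 60 g (consistency ceiling)
b_ub = [-450, -12, 60]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 2)] * 4)
print(res.x, res.fun)                   # 100 g units of each ingredient, total cost
```

The consistency row is what distinguishes food formulation from diet optimization: without it, the solver happily returns a starch-dense mix that no child could eat as a soft porridge.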
ERIC Educational Resources Information Center
Roberts, Albert R.; Schervish, Phillip
1988-01-01
National survey data on the cost-effectiveness and cost-benefits of 11 different juvenile offender treatment program types are reviewed. Best-worst-midpoint, threshold, and convergency analyses are applied to these studies. Meaningful ways of dealing with missing information when evaluating alternative juvenile justice policies and programs are…
Méry, Jacques; Bayer, Stefan
2005-12-01
Dry tomb and bioreactor landfills were analyzed with respect to their external costs in an intergenerational cost-benefit analysis in a partial framework, which enabled a sounder comparison to be carried out between these two technologies from a socio-economic viewpoint. Obviously, this approach was only a first step towards building a comprehensive basis for any environmental as well as fiscal policy in the field of waste management. All external costs are identified and evaluated in three different scenarios, corresponding to a worst case, a best guess and a best case. Discounting is clearly crucial from an intergenerational perspective. Generation-adjusted discounting (GAD) was applied to take into account equity as well as efficiency criteria, in order to deal with three different types of uncertainties that are decisive in waste policy decisions: a physical uncertainty is captured by introducing our three different scenarios; a macroeconomic uncertainty is taken into consideration by calculating present values using different real growth rates; and a microeconomic uncertainty is taken into account by considering individual peculiarities reflected in their subjective time preference rate. The findings show that whenever real GDP growth is low (less than 1%), the bioreactor is generally superior to the dry tomb (lower present values of external costs). This statement becomes more valid as the growth rate decreases. However, whenever there are high positive growth rates, it is the dry tomb technology which is superior to the bioreactor system.
Investigation and evaluation of shuttle/GPS navigation system
NASA Technical Reports Server (NTRS)
Nilsen, P. W.
1977-01-01
Iterative procedures were used to analyze the performance of two preliminary shuttle/GPS navigation system configurations: an early OFT experimental system and a more sophisticated system which consolidates several separate navigation functions thus permitting net cost savings from decreased shuttle avionics weight and power consumption, and from reduced ground data processing. The GPS system can provide on-orbit navigation accuracy an order of magnitude better than the baseline system, with very adequate link margins. The worst-case link margin is 4.3 dB. This link margin accounts for shuttle RF circuit losses which were minimized under the constraints of program schedule and environmental limitations. Implicit in the link analyses are the location trade-offs for preamplifiers and antennas.
NASA Technical Reports Server (NTRS)
Simon, M. K.; Polydoros, A.
1981-01-01
This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.
Optimizing Teleportation Cost in Distributed Quantum Circuits
NASA Astrophysics Data System (ADS)
Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh
2018-03-01
The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations that prevent a large quantum computer from being realized as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered, and for each of these configurations the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future work on distributed quantum circuits.
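The configuration search can be sketched as a brute-force enumeration under a simplified cost model (not the paper's full procedure): for each global gate, choose the executing partition, teleport any operand qubit currently resident elsewhere, and keep the cheapest configuration.

```python
# Brute-force sketch of minimizing teleportations in a two-partition
# distributed quantum circuit. home[q] is the partition where qubit q
# initially resides; each two-qubit global gate is executed in partition 0
# or 1, and every operand qubit not already there costs one teleportation.
from itertools import product

home = {"q0": 0, "q1": 0, "q2": 1, "q3": 1}          # initial placement
gates = [("q0", "q2"), ("q1", "q3"), ("q0", "q3")]   # global gates, in order

def teleportations(exec_sites):
    loc, cost = dict(home), 0
    for (a, b), site in zip(gates, exec_sites):
        for q in (a, b):
            if loc[q] != site:
                cost += 1
                loc[q] = site                        # qubit stays until moved again
    return cost

best = min(product((0, 1), repeat=len(gates)), key=teleportations)
print(best, teleportations(best))
```

Enumerating all 2^k configurations is only viable for small k, which is why a structured search procedure such as the one proposed here matters as circuits grow.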
2012-01-01
Background: Our companion paper discussed the yield benefits achieved by integrating deacetylation, mechanical refining, and washing with low acid and low temperature pretreatment. To evaluate the impact of the modified process on the economic feasibility, a techno-economic analysis (TEA) was performed based on the experimental data presented in the companion paper. Results: The cost benefits of dilute acid pretreatment technology combined with the process alternatives of deacetylation, mechanical refining, and pretreated solids washing were evaluated using cost benefit analysis within a conceptual modeling framework. Control cases were pretreated at much lower acid loadings and temperatures than those used in the NREL 2011 design case, resulting in much lower annual ethanol production. Therefore, the minimum ethanol selling prices (MESP) of the control cases were $0.41-$0.77 higher than the $2.15/gallon MESP of the design case. This increment is highly dependent on the carbohydrate content in the corn stover. However, if pretreatment was employed with either deacetylation or mechanical refining, the MESPs were reduced by $0.23-$0.30/gallon. Combining both steps could lower the MESP by a further $0.44-$0.54/gallon. Washing of the pretreated solids could also greatly improve the final ethanol yields. However, the large capital cost of the solid–liquid separation unit negatively influences the process economics. Finally, sensitivity analysis was performed to study the effect of the cost of the pretreatment reactor and the energy input for mechanical refining. A 50% reduction in the pretreatment reactor cost reduced the MESP of the entire conversion process by $0.11-$0.14/gallon, while a 10-fold increase in energy input for mechanical refining would increase the MESP by $0.07/gallon. Conclusion: Deacetylation and mechanical refining process options combined with low acid, low severity pretreatments show improvements in ethanol yields and calculated MESP for cellulosic ethanol production. PMID:22967479
Evaluation of alternative future energy scenarios for Brazil using an energy mix model
NASA Astrophysics Data System (ADS)
Coelho, Maysa Joppert
The purpose of this study is to model and assess the performance and the emissions impacts of electric energy technologies in Brazil, based on selected economic scenarios, for a time frame of 40 years, taking 1995 as the base year. A Base scenario has been developed for each of three economic development projections, based upon a sectoral analysis. Data on the characteristics of over 300 end-use technologies and 400 energy conversion technologies have been collected. The stand-alone MARKAL technology-based energy-mix model, first developed at Brookhaven National Laboratory, was applied to a base case study and five alternative case studies for each economic scenario. The alternative case studies are: (1) a minimum increase in the thermoelectric contribution to the power production system of 20 percent after 2010; (2) extreme values for the crude oil price; (3) a minimum increase in the renewable technologies contribution to the power production system of 20 percent after 2010; (4) uncertainty in the cost of future renewable conversion technologies; and (5) the model is forced to use the natural gas plants committed to be built in the country. Results such as the distribution of fuel used for power generation, electricity demand across economy sectors, total CO2 emissions from burning fossil fuels for power generation, shadow price (marginal cost) of technologies, and others, are evaluated and compared to the Base scenarios previously established. Among some key findings regarding the Brazilian energy system, it may be inferred that: (1) diesel technologies are estimated to be the most cost-effective thermal technology in the country; (2) wind technology is estimated to be the most cost-effective technology when a minimum share of renewables is imposed on the system; and (3) hydroelectric technologies present the highest cost/benefit relation among all conversion technologies considered. These results are subject to the limitations of key input assumptions and key assumptions of the modeling framework, and are used as the basis for recommendations regarding energy development priorities for Brazil.
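The MARKAL-style least-cost choice underlying such scenarios can be illustrated with a toy linear program: pick the generation mix that meets demand at minimum cost subject to a 20% renewable floor, as in alternative case (3). The technologies, costs, and capacity caps below are invented for illustration and are not the study's data.

```python
from scipy.optimize import linprog

# Illustrative technology data (not the study's): $/MWh cost, renewable flag,
# and a per-technology capacity cap in MWh for the period considered.
techs = ["hydro", "wind", "natural_gas", "diesel"]
cost = [30.0, 45.0, 60.0, 90.0]
renewable = [1.0, 1.0, 0.0, 0.0]
cap = 60.0
demand = 100.0
min_renew_share = 0.20     # alternative case (3): >= 20% renewables

# min cost^T x  s.t.  sum(x) == demand,  renewable share >= 20%,  0 <= x <= cap
A_eq = [[1.0] * len(techs)]
b_eq = [demand]
A_ub = [[-r for r in renewable]]           # -sum(renew * x) <= -0.2 * demand
b_ub = [-min_renew_share * demand]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, cap)] * len(techs))
print(dict(zip(techs, [round(v, 1) for v in res.x])), "total cost:", res.fun)
```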
Berger, Karin; Schopohl, Dorothee; Rieger, Christina; Ostermann, Helmut
2015-12-01
Busulfan (BU) used as cytoreductive conditioning prior to hematopoietic stem cell transplantation (HSCT) is available as intravenous (IV) and oral (O) preparations. IV-BU has clinical advantages associated with relevant incremental costs. The aim was to determine the economic impact of IV-BU versus O-BU in adult HSCT recipients from a German health care provider's perspective. A budget-impact model (BIM) including costs and risks for oral mucositis (OM), infection with OM, and hepatic sinusoidal obstruction syndrome (SOS) was developed. Model inputs are literature data comparing clinical effects of IV-BU versus O-BU and German cost data (conditioning therapy, treatment of OM, infections, SOS without/with multiorgan failure) from literature and tariff lists. Base case calculations resulted in the following: total costs of adverse events were €86,434 with O-BU and €44,376 with IV-BU for ten patients each. Considering costs of adverse events and drugs, about €5840 per ten patients receiving IV-BU are saved. Sensitivity analyses were conducted in several ways. Cost savings range between €4910 and €12,640 per ten patients for all adverse events, and €2070 or €1140 per ten patients considering SOS only. Drug treatment of SOS and treatment of multiorgan failure during severe SOS are major cost drivers. Worst case scenario calculations (assuming -25% risk of all adverse events for O-BU and +25% for IV-BU) yield up to €27,570 per ten patients with IV-BU. Considering costs of adverse events and drugs, IV-BU is the dominant alternative from a German provider's perspective. For more comprehensive economic evaluations, additional epidemiological data, evidence on clinical outcomes, patient-reported outcomes, and treatment patterns are needed.
Optimal Analyses for 3×n AB Games in the Worst Case
NASA Astrophysics Data System (ADS)
Huang, Li-Te; Lin, Shun-Shii
The past decades have witnessed a growing interest in research on deductive games such as Mastermind and the AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. Traditional tree-search approaches are not appropriate for this problem, however, since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. Therefore, an intensive analysis of optimal worst-case play of 3×n AB games is conducted, and a sophisticated method called structural reduction, which aims at characterizing the worst situation in this game, is developed in the study. Furthermore, a formula for calculating the optimal number of guesses required for arbitrary values of n is derived and proven.
Benefit-cost assessment programs: Costa Rica case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, A.L.; Trocki, L.K.
An assessment of mineral potential, in terms of types and numbers of deposits, approximate location, and associated tonnage and grades, is a valuable input to a nation's economic planning and mineral policy development. This study provides a methodology for applying benefit-cost analysis to mineral resource assessment programs, both to determine the cost effectiveness of resource assessments and to ascertain future benefits to the nation. In a case study of Costa Rica, the benefit-cost ratio of a resource assessment program was computed to be a minimum of 4:1 ($10.6 million to $2.5 million), not including the economic benefits accruing from the creation of 800 mining-sector and 1,200 support-services jobs. The benefit-cost ratio would be considerably higher if presently proposed revisions of mineral policy were implemented and benefits could be defined for Costa Rica.
A comparison of fitness-case sampling methods for genetic programming
NASA Astrophysics Data System (ADS)
Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel
2017-11-01
Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
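Of the four sampling methods compared, Lexicase Selection is the easiest to state compactly: each selection event shuffles the fitness cases and repeatedly discards candidates that are not elite on the next case. The sketch below is a generic implementation of that standard formulation with an invented error matrix; it is not the study's code.

```python
import random

def lexicase_select(population, errors):
    """Lexicase selection: shuffle the fitness cases, then repeatedly keep
    only the candidates that are elite on the next case in the ordering.
    errors[i][j] is the error of individual i on fitness case j."""
    candidates = list(range(len(population)))
    case_order = list(range(len(errors[0])))
    random.shuffle(case_order)
    for c in case_order:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    return population[random.choice(candidates)]

# Tiny demo: 3 individuals evaluated on 3 fitness cases.
print(lexicase_select(["ind0", "ind1", "ind2"],
                      [[0, 3, 1], [1, 0, 0], [2, 2, 2]]))
```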
Takaseya, Tohru; Fumoto, Hideyuki; Shiose, Akira; Arakawa, Yoko; Rao, Santosh; Horvath, David J; Massiello, Alex L; Mielke, Nicole; Chen, Ji-Feng; Zhou, Qun; Dessoffy, Raymond; Kramer, Larry; Benefit, Stephen; Golding, Leonard A R; Fukamachi, Kiyotaka
2010-12-01
The purpose of this study was to evaluate in vivo the biocompatibility of BioMedFlex (BMF), a new resilient, hard-carbon, thin-film coating, as a blood journal bearing material in Cleveland Heart's (Charlotte, NC, USA) continuous-flow right and left ventricular assist devices (RVADs and LVADs). BMF was applied to RVAD rotating assemblies or both rotating and stator assemblies in three chronic bovine studies. In one case, an LVAD with a BMF-coated stator was also implanted. Cases 1 and 3 were electively terminated at 18 and 29 days, respectively, with average measured pump flows of 4.9 L/min (RVAD) in Case 1 and 5.7 L/min (RVAD) plus 5.7 L/min (LVAD) in Case 3. Case 2 was terminated prematurely after 9 days because of sepsis. The sepsis, combined with running the pump at minimum speed (2000 rpm), presented a worst-case biocompatibility challenge. Postexplant evaluation of the blood-contacting journal bearing surfaces showed no biologic deposition in any of the four pumps. Thrombus inside the RVAD inlet cannula in Case 3 is believed to be the origin of a nonadherent thrombus wrapped around one of the primary impeller blades. In conclusion, we demonstrated that BMF coatings can provide good biocompatibility in the journal bearing for ventricular assist devices. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2003-01-01
In this paper we present a comparison of trajectory optimization approaches for the minimum-fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), quasi-Newton, and Nelder-Mead simplex. Several cost function parameterizations are considered for the direct approach. We choose the one direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line-of-apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all methods considered in this paper.
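For the circular-to-circular coplanar case, a direct method reduces to minimizing total delta-v over a small set of transfer parameters. The sketch below uses Nelder-Mead to optimize the intermediate apoapsis of a bi-elliptic transfer; for the LEO-to-GEO radii chosen here the optimizer should drive the solution toward the Hohmann transfer (rb approaching R2). The one-parameter formulation and the numbers are illustrative assumptions, not the paper's setup.

```python
import math
from scipy.optimize import minimize

MU = 398600.4418          # km^3/s^2, Earth's gravitational parameter
R1, R2 = 6678.0, 42164.0  # initial and final circular orbit radii (km)

def v_circ(r):
    return math.sqrt(MU / r)

def v_ellipse(r, a):
    return math.sqrt(MU * (2.0 / r - 1.0 / a))   # vis-viva equation

def total_dv(x):
    """Total impulsive delta-v of a bi-elliptic transfer whose intermediate
    apoapsis rb is the free parameter (rb = R2 recovers the Hohmann case)."""
    rb = x[0]
    if rb < R2:                  # transfer ellipse must reach the target radius
        return 1e9               # penalty wall for infeasible parameters
    dv1 = v_ellipse(R1, 0.5 * (R1 + rb)) - v_circ(R1)
    dv2 = v_ellipse(rb, 0.5 * (rb + R2)) - v_ellipse(rb, 0.5 * (R1 + rb))
    dv3 = abs(v_circ(R2) - v_ellipse(R2, 0.5 * (rb + R2)))
    return dv1 + dv2 + dv3

res = minimize(total_dv, x0=[80000.0], method="Nelder-Mead")
print(f"optimal rb = {res.x[0]:.0f} km, total dv = {res.fun:.4f} km/s")
```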
Vogt, Florian M; Hunold, Peter; Haegele, Julian; Stahlberg, Erik; Barkhausen, Jörg; Goltz, Jan Peter
2018-04-01
Calculation of process-oriented costs for inpatient endovascular treatment of peripheral artery disease (PAD) from an interventional radiology (IR) perspective, and comparison of revenue situations in consideration of different ways to calculate internal treatment charges (ITCs) and diagnosis-related groups (DRG) for an independent IR department. Costs (personnel, operating, material, and indirect costs) for endovascular treatment of PAD patients in an inpatient setting were calculated on a full cost basis. These costs were compared to the revenue situation for IR for five different scenarios: 1) IR receives the total DRG amount; IR receives the following DRG shares using ITCs based on InEK shares for 2) the "Radiology" cost center type, 3) the "OP" cost center type, 4) the "Radiology" and "OP" cost center types, and 5) based on DKG-NT (scale of charges of the German Hospital Society). 78 patients (mean age: 68.6 ± 11.4 y) with the following DRGs were evaluated: F59A (n = 6), F59B (n = 14), F59C (n = 20) and F59D (n = 38). The lengths of stay for these DRG groups were 15.8 ± 12.1, 9.4 ± 7.8, 2.8 ± 3.7 and 3.4 ± 6.5 days. Material costs represented the bulk of all costs, especially if new and complex endovascular procedures were performed. Revenues for neither InEK shares nor ITCs based on DKG-NT were high enough to cover material costs. Contribution margins for the five scenarios were 1 = €1,539.29, 2 = €-1,775.31, 3 = €-2,579.41, 4 = €-963.43, 5 = €-2,687.22 in F59A; 1 = €-792.67, 2 = €-2,685.00, 3 = €-2,600.81, 4 = €-1,618.94, 5 = €-3,060.03 in F59B; 1 = €-879.87, 2 = €-2,633.14, 3 = €-3,001.07, 4 = €-1,952.33, 5 = €-3,136.24 in F59C; and 1 = €703.65, 2 = €-106.35, 3 = €-773.86, 4 = €205.14, 5 = €-647.22 in F59D. InEK shares return on average €150-500 more than ITCs based on the DKG-NT catalog. In this study, positive contribution margins were seen only if IR receives the complete DRG amount. InEK shares do not cover incurred costs, with material costs representing the main part of treatment costs. Internal treatment charges based on the DKG-NT catalog provide the worst cost coverage. · Internal treatment charges based on the DKG-NT catalog provide the worst cost coverage for interventional radiology at our university hospital. · Shares from the InEK matrix such as the cost center "radiology" or "OP" as revenue for IR are not sufficient to cover incurred costs. A positive contribution margin is achieved only in the case of a compensation method in which IR receives the total DRG amount. · Vogt FM, Hunold P, Haegele J et al. Comparison of the Revenue Situation in Interventional Radiology Based on the Example of Peripheral Artery Disease in the Case of a DRG Payment System and Various Internal Treatment Charges. Fortschr Röntgenstr 2017; 190: 348-357. © Georg Thieme Verlag KG Stuttgart · New York.
How can health systems research reach the worst-off? A conceptual exploration.
Pratt, Bridget; Hyder, Adnan A
2016-11-15
Health systems research is increasingly being conducted in low and middle-income countries (LMICs). Such research should aim to reduce health disparities between and within countries as a matter of global justice. For such research to do so, ethical guidance that is consistent with egalitarian theories of social justice proposes it ought to (amongst other things) focus on worst-off countries and research populations. Yet who constitutes the worst-off is not well-defined. By applying existing work on disadvantage from political philosophy, the paper demonstrates that (at least) two options exist for how to define the worst-off upon whom equity-oriented health systems research should focus: those who are worst-off in terms of health or those who are systematically disadvantaged. The paper describes in detail how both concepts can be understood and what metrics can be relied upon to identify worst-off countries and research populations at the sub-national level (groups, communities). To demonstrate how each can be used, the paper considers two real-world cases of health systems research and whether their choice of country (Uganda, India) and research population in 2011 would have been classified as amongst the worst-off according to the proposed concepts. The two proposed concepts can classify different countries and sub-national populations as worst-off. It is recommended that health researchers (or other actors) should use the concept that best reflects their moral commitments-namely, to perform research focused on reducing health inequalities or systematic disadvantage more broadly. If addressing the latter, it is recommended that they rely on the multidimensional poverty approach rather than the income approach to identify worst-off populations.
Incremental cost of postacute care in nursing homes.
Spector, William D; Limcangco, Maria Rhona; Ladd, Heather; Mukamel, Dana
2011-02-01
To determine whether the case mix index (CMI) based on the 53 Resource Utilization Groups (RUGs) captures all the cross-sectional variation in nursing home (NH) costs, or whether NHs that have a higher percent of Medicare skilled care days (%SKILLED) have additional costs. Data and sample: nine hundred and eighty-eight NHs in California in 2005. Data are from Medicaid cost reports, the Minimum Data Set, and the Economic Census. We estimate hybrid cost functions, which include, in addition to outputs, case mix, ownership, wages, and %SKILLED. Two-stage least-squares (2SLS) analysis was used to deal with the potential endogeneity of %SKILLED and CMI. On average, 11 percent of NH days were due to skilled care. Based on the 2SLS model, %SKILLED is associated with costs even when controlling for CMI. The marginal cost of a one-percentage-point increase in %SKILLED is estimated at U.S.$70,474, or about 1.2 percent of annual costs for the average-cost facility. Subanalyses show that the increase in costs is mainly due to additional expenses for nontherapy ancillaries and rehabilitation. The 53-RUGs case mix does not account completely for all the variation in actual costs of care for postacute patients in NHs. © Health Research and Educational Trust.
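The instrumental-variables step can be illustrated with a bare-bones two-stage least-squares estimator: regress the endogenous share on the instruments, then run OLS with the fitted values. This is a generic numpy sketch on simulated data, not the study's specification; the variable construction is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 988                                   # matches the California sample size
z = rng.normal(size=n)                    # hypothetical instrument
u = rng.normal(size=n)                    # unobserved cost shock
skilled = 0.11 + 0.5 * z + 0.3 * u        # endogenous %SKILLED proxy
cmi = rng.normal(1.0, 0.1, size=n)        # case mix index, exogenous here
cost = 5.0 + 2.0 * skilled + 1.5 * cmi + u

# First stage: project the endogenous regressor on instrument + exogenous vars.
W = np.column_stack([np.ones(n), cmi, z])
skilled_hat = W @ np.linalg.lstsq(W, skilled, rcond=None)[0]

# Second stage: OLS of cost on exogenous vars + fitted endogenous regressor.
X = np.column_stack([np.ones(n), cmi, skilled_hat])
beta = np.linalg.lstsq(X, cost, rcond=None)[0]
print("2SLS (const, CMI, %SKILLED):", beta.round(2))   # approx (5.0, 1.5, 2.0)
```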
Faith, Daniel P
2015-02-19
The phylogenetic diversity measure ('PD') quantifies the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses: expected PD may not choose the conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
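Expected PD is commonly computed as the sum over branches of branch length times the probability that at least one descendant tip survives, assuming independent extinctions. The sketch below implements that textbook formula on a toy tree; the tree and the survival probabilities are illustrative, not from the paper.

```python
def expected_pd(branches, survival):
    """Expected phylogenetic diversity under independent extinctions:
    sum over branches of length * P(at least one descendant tip survives).
    branches: list of (length, set-of-tips-below); survival: tip -> prob."""
    total = 0.0
    for length, tips in branches:
        p_all_lost = 1.0
        for tip in tips:
            p_all_lost *= 1.0 - survival[tip]
        total += length * (1.0 - p_all_lost)
    return total

# Toy 3-taxon tree ((A,B),C): the internal branch subtends tips A and B.
branches = [(2.0, {"A"}), (2.0, {"B"}), (3.0, {"A", "B"}), (5.0, {"C"})]
survival = {"A": 0.9, "B": 0.5, "C": 0.2}
print(expected_pd(branches, survival))   # 1.8 + 1.0 + 3*0.95 + 1.0 = 6.65
```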
Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.
Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng
2013-01-01
Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve worst-case cache performance and, in turn, worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.
Eliciting older people's preferences for exercise programs: a best-worst scaling choice experiment.
Franco, Marcia R; Howard, Kirsten; Sherrington, Catherine; Ferreira, Paulo H; Rose, John; Gomes, Juliana L; Ferreira, Manuela L
2015-01-01
What relative value do older people with a previous fall or mobility-related disability attach to different attributes of exercise? Prospective, best-worst scaling study. Two hundred and twenty community-dwelling people, aged 60 years or older, who presented with a previous fall or mobility-related disability. Online or face-to-face questionnaire. Utility values for different exercise attributes and levels. The utility levels were calculated by asking participants to select the attribute that they considered to be the best (i.e., they were most likely to want to participate in programs with this attribute) and worst (i.e., least likely to want to participate). The attributes included were: exercise type; time spent on exercise per day; frequency; transport type; travel time; out-of-pocket costs; reduction in the chance of falling; and improvement in the ability to undertake tasks inside and outside of home. The attributes of exercise programs with the highest utility values were: home-based exercise and no need to use transport, followed by an improvement of 60% in the ability to do daily tasks at home, no costs, and decreasing the chance of falling to 0%. The attributes with the lowest utility were travel time of 30 minutes or more and out-of-pocket costs of AUD50 per session. The type of exercise, travel time and costs are more highly valued by older people than the health benefits. These findings suggest that physical activity engagement strategies need to go beyond education about health benefits and focus on improving accessibility to exercise programs. Exercise that can be undertaken at or close to home without any cost is most likely to be taken up by older people with past falls and/or mobility-related disability. Copyright © 2014 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
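Best-worst choice data are often summarized, before any formal choice modelling, by a simple counting analysis: an attribute's score is its best picks minus its worst picks, normalized by how often it was shown. The sketch below implements that counting score on made-up task data; it illustrates the general technique, not the study's utility estimation.

```python
from collections import Counter

def bws_scores(tasks):
    """Best-minus-worst counting scores, normalized by exposure.
    tasks: list of (shown_attributes, best_choice, worst_choice)."""
    shown, best, worst = Counter(), Counter(), Counter()
    for attrs, b, w in tasks:
        shown.update(attrs)
        best[b] += 1
        worst[w] += 1
    return {a: (best[a] - worst[a]) / shown[a] for a in shown}

tasks = [  # invented example tasks
    (("home-based", "no cost", "30 min travel"), "home-based", "30 min travel"),
    (("no cost", "falls -50%", "30 min travel"), "no cost", "30 min travel"),
    (("home-based", "falls -50%", "no cost"), "home-based", "falls -50%"),
]
print(bws_scores(tasks))
```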
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Fisk, William J.
2009-07-08
Demand controlled ventilation (DCV) was evaluated for general office spaces in California. A medium-size office building meeting the prescriptive requirements of the 2008 California building energy efficiency standards (CEC 2008) was assumed in the building energy simulations performed with the EnergyPlus program to calculate the DCV energy savings potential in five typical California climates. Three design occupancy densities and two minimum ventilation rates were used as model inputs to cover a broader range of design variations. The assumed values of minimum ventilation rates in offices without DCV, based on two different measurement methods, were 81 and 28 cfm per occupant. These rates are based on the co-author's unpublished analyses of data from EPA's survey of 100 U.S. office buildings. These minimum ventilation rates exceed the 15 to 20 cfm per person required in most ventilation standards for offices. The cost effectiveness of applying DCV in general office spaces was estimated via a life cycle cost analysis that considered system costs and energy cost reductions. The results of the energy modeling indicate that the energy savings potential of DCV is largest in the desert area of California (climate zone 14), followed by Mountains (climate zone 16), Central Valley (climate zone 12), North Coast (climate zone 3), and South Coast (climate zone 6). The results of the life cycle cost analysis show DCV is cost effective for office spaces if the typical minimum ventilation rate without DCV is 81 cfm per person, except at the low design occupancy of 10 people per 1000 ft² in climate zones 3 and 6. At the low design occupancy of 10 people per 1000 ft², the greatest DCV life cycle cost savings is a net present value (NPV) of $0.52/ft² in climate zone 14, followed by $0.32/ft² in climate zone 16 and $0.19/ft² in climate zone 12. At the medium design occupancy of 15 people per 1000 ft², the DCV savings are higher, with an NPV of $0.93/ft² in climate zone 14, followed by $0.55/ft² in climate zone 16, $0.46/ft² in climate zone 12, $0.30/ft² in climate zone 3, and $0.16/ft² in climate zone 6. At the high design occupancy of 20 people per 1000 ft², the DCV savings are even higher, with an NPV of $1.37/ft² in climate zone 14, followed by $0.86/ft² in climate zone 16, $0.84/ft² in climate zone 3, $0.82/ft² in climate zone 12, and $0.65/ft² in climate zone 6. DCV was not found to be cost effective if the typical minimum ventilation rate without DCV is 28 cfm per occupant, except at the high design occupancy of 20 people per 1000 ft² in climate zones 14 and 16. Until the large uncertainties about the base case ventilation rates in offices without DCV are reduced, the case for requiring DCV in general office spaces will be weak.
Optimizing conceptual aircraft designs for minimum life cycle cost
NASA Technical Reports Server (NTRS)
Johnson, Vicki S.
1989-01-01
A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Results also show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when considering the number of engines.
7 CFR 701.10 - Qualifying minimum cost of restoration.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 Qualifying minimum cost of restoration. 701.10 Section..., DEPARTMENT OF AGRICULTURE, AGRICULTURAL CONSERVATION PROGRAM, EMERGENCY CONSERVATION PROGRAM AND CERTAIN RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART, § 701.10 Qualifying minimum cost of restoration...
Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan
2016-06-02
Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study of children suspected of severe malaria in sub-Saharan Africa, the question arose of how much blood could be safely sampled; the intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not to exceed 3.8 % of total blood volume (TBV). In a simulation exercise using data from children previously enrolled in a study of severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8 % safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data on age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe, in mL) was calculated by multiplying the body weight (kg) by the factor 80 (mL/kg). Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor was the ratio of the child's Hb to the reference Hb; both the lowest ('best case') and highest ('worst case') reference Hb values were used. Next, the exact volume represented by a 3.8 % proportion of this TBVf was calculated and compared with the blood volumes intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise showed that in 5.3 % (best case) and 11.4 % (worst case) of children the blood volume intended to be sampled would exceed the volume defined by the 3.8 % safety guideline. The highest proportions were in the age groups 2-6 months (19.0 %; worst case scenario) and 6 months-2 years (15.7 %; worst case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst case scenario (p = 0.016). Blood sampling in children for research in P. falciparum endemic settings may easily violate the proposed safety guideline when applied to TBVf. Ethical committees and researchers should be wary of this and take appropriate precautions.
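The safety check described above is straightforward arithmetic: estimate total blood volume at 80 mL/kg, scale it by the child's haemoglobin relative to a reference value, and take 3.8 % of the result as the sampling ceiling. A minimal sketch with illustrative numbers:

```python
def max_safe_sample_ml(weight_kg, hb_child, hb_ref, fraction=0.038):
    """Sampling ceiling per the 3.8 % guideline applied to functional TBV."""
    tbv_e = 80.0 * weight_kg              # estimated TBV at 80 mL/kg
    tbv_f = tbv_e * (hb_child / hb_ref)   # anaemia-corrected functional TBV
    return fraction * tbv_f

# Illustrative child: 10 kg, Hb 6 g/dL against a reference Hb of 11 g/dL.
limit = max_safe_sample_ml(10.0, 6.0, 11.0)
print(f"ceiling = {limit:.1f} mL; intended 6 mL exceeds it: {6.0 > limit}")
```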
Discussions On Worst-Case Test Condition For Single Event Burnout
NASA Astrophysics Data System (ADS)
Liu, Sandra; Zafrani, Max; Sherman, Phillip
2011-10-01
This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs based on analysis of quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB is the ion with the highest mass, which produces the highest transient current through charge deposition and displacement damage. The analyses also show that it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, a result verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.
Auerbach, H; Schreyögg, J; Busse, R
2006-01-01
The purpose of this study is to assess the cost-effectiveness (net costs per life year gained) of telemedical devices for pre-clinical traffic accident emergency rescue in Germany. Two equipment versions of a telemedical device are compared from a societal perspective with the baseline in Germany, i.e. the non-application of telemedicine in emergency rescues. The analysis is based on retrospective statistical data covering a period of 10 years, with discounted costs not adjusted for inflation. Due to the uncertainty of the data, certain assumptions and estimates were necessary. The outcome is measured in terms of "life years gained" by reducing therapy-free intervals and improving the first aid provided by laypersons. The introduction of the basic equipment version, "Automatic Accident Alert", is associated with net costs per life year gained of €247,977 (at baseline assumptions). The full equipment version of the telemedical device would lead to estimated net costs of €239,524 per life year gained. Multi-way sensitivity analysis with best- and worst-case scenarios suggests that decreasing system costs would disproportionately reduce total costs, and that rapid market penetration would largely increase the system's benefit while simultaneously reducing costs. The net costs per life year gained for the two versions of the telemedical device for pre-clinical emergency rescue after traffic accidents are estimated to be quite high. However, the implementation of the device as part of a larger European coordinated initiative is more realistic.
Bartnicki, Jerzy; Amundsen, Ingar; Brown, Justin; Hosseini, Ali; Hov, Øystein; Haakenstad, Hilde; Klein, Heiko; Lind, Ole Christian; Salbu, Brit; Szacinski Wendel, Cato C; Ytre-Eide, Martin Album
2016-01-01
The Russian nuclear submarine K-27 suffered a loss-of-coolant accident in 1968 and, with nuclear fuel in both reactors, was scuttled in 1981 in the outer part of Stepovogo Bay on the eastern coast of Novaya Zemlya. The inventory of spent nuclear fuel on board the submarine is of concern because it represents a potential source of radioactive contamination of the Kara Sea, and a criticality accident with potential for long-range atmospheric transport of radioactive particles cannot be ruled out. To address these concerns and to provide a better basis for evaluating possible radiological impacts of potential releases in case a salvage operation is initiated, we assessed the atmospheric transport of radionuclides and deposition in Norway from a hypothetical criticality accident on board the K-27. To achieve this, a long-term (33-year) meteorological database was prepared and used to select the worst-case meteorological scenarios for each of three selected locations of the potential accident. Next, the dispersion model SNAP was run with the source term for the worst-case accident scenario and the selected meteorological scenarios. The results showed predictions to be very sensitive to the estimation of the source term for the worst-case accident, and especially to the sizes and densities of the released radioactive particles. The results indicated that a large area of Norway could be affected, but that the deposition in Northern Norway would be considerably higher than in other areas of the country. The simulations showed that deposition from the worst-case scenario of a hypothetical K-27 accident would be at least two orders of magnitude lower than the deposition observed in Norway following the Chernobyl accident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alpha Collaboration; Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.
2013-04-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.
Aging of Weapon Seals – An Update on Butyl O-ring Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Mark H.
2011-07-13
During testing under the Enhanced Surveillance Campaign in 2001, preliminary data detected a previously unknown and potentially serious concern with recently procured butyl o-rings on several programs. All butyl o-rings molded from a proprietary formulation throughout the period circa 1999 through 2001 had less than a full cure. Engineering judgment was that under-curing is detrimental and could possibly lead to sub-optimum performance or, in the worst case, premature seal failure. An aging study was undertaken to ensure that suspect o-rings installed in the stockpile will retain sufficient sealing force for a minimum ten-year service life. A new prediction model developed for this study indicates suspect o-rings do not need to be replaced before the ten-year service life. Long-term testing results are reported on a yearly basis to validate the prediction model. This report documents the aging results for the period September 2002 to January 2011.
Effect of censoring trace-level water-quality data on trend-detection capability
Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.
1984-01-01
Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (the detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were detected more effectively in uncensored data than in censored data, even when the censored data were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether valuable information for trend analysis is in fact eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analysis.
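The experiment's logic can be reproduced in miniature: simulate a linear trend plus noise, optionally censor values below a detection limit, apply a nonparametric trend test, and compare detection rates. The sketch below uses Kendall's tau as the trend test and substitutes half the detection limit for censored values; the specific numbers and the substitution rule are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)

def trend_detected(y, alpha=0.05):
    """One-sided Kendall tau test for an upward trend against time."""
    t = np.arange(len(y))
    tau, p = kendalltau(t, y)
    return (tau > 0) and (p / 2 < alpha)

def detection_rate(slope, sigma, detection_limit=None, n=40, reps=1000):
    hits = 0
    t = np.arange(n)
    for _ in range(reps):
        y = 1.0 + slope * t + rng.normal(0.0, sigma, n)
        if detection_limit is not None:
            # censoring: report every sub-limit value as a common surrogate
            y = np.where(y < detection_limit, detection_limit / 2.0, y)
        hits += trend_detected(y)
    return hits / reps

print("uncensored:", detection_rate(0.02, 0.5))
print("censored  :", detection_rate(0.02, 0.5, detection_limit=1.3))
```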
Khor, Joo Moy; Tizzard, Andrew; Demosthenous, Andreas; Bayford, Richard
2014-06-01
Electrical impedance tomography (EIT) could be significantly advantageous for continuous monitoring of lung development in newborn and, in particular, preterm infants, as it is non-invasive and safe to use within the intensive care unit. It has been demonstrated that an accurate boundary form of the forward model is important to minimize artefacts in reconstructed electrical impedance images. This paper presents the outcomes of initial investigations into acquiring patient-specific thorax boundary information using a network of flexible sensors that imposes no restrictions on the patient's normal breathing and movements. The investigations include: (1) a description of the basis of the reconstruction algorithms, (2) tests to determine a minimum number of bend sensors, (3) validation of two approaches to reconstruction and (4) an example of a commercially available bend sensor and its performance. Simulation results using ideal sensors show that, in the worst case, a total shape error of less than 6% with respect to the total perimeter can be achieved.
Practical Algorithms for the Longest Common Extension Problem
NASA Astrophysics Data System (ADS)
Ilie, Lucian; Tinta, Liviu
The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
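The first algorithm amounts to direct character comparison from the two starting positions, with no preprocessing at all. A minimal sketch of that approach (the example string is ours):

```python
def lce(s, i, j):
    """Length of the longest substring of s starting at both i and j.
    No preprocessing; O(answer) character comparisons per query."""
    k, n = 0, len(s)
    while i + k < n and j + k < n and s[i + k] == s[j + k]:
        k += 1
    return k

assert lce("abcabcax", 0, 3) == 4   # "abca" starts at positions 0 and 3
```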
Of possible cheminformatics futures.
Oprea, Tudor I; Taboureau, Olivier; Bologa, Cristian G
2012-01-01
For over a decade, cheminformatics has contributed to a wide array of scientific tasks, from analytical chemistry and biochemistry to pharmacology and drug discovery; and although its contributions to decision making are recognized, the challenge is how it can contribute to faster development of novel, better products. Here we address the future of cheminformatics with a primary focus on innovation. Cheminformatics developers often need to choose between "mainstream" (i.e., accepted, expected) and novel, leading-edge tools, with an increasing trend toward open science. Possible futures for cheminformatics include the worst case scenario (lack of funding, no creative usage) as well as the best case scenario (complete integration, from systems biology to virtual physiology). As "-omics" technologies advance and computer hardware improves, compounds will no longer be profiled only at the molecular level, but also in terms of genetic and clinical effects. Among potentially novel tools, we anticipate machine learning models based on free-text processing, increased performance in environmental cheminformatics, significant decision-making support, as well as the emergence of robot scientists conducting automated drug discovery research. Furthermore, cheminformatics is anticipated to expand the frontiers of knowledge and evolve in an open-ended, extensible manner, allowing us to explore multiple research scenarios in order to avoid an epistemological "local information minimum trap".
A search game model of the scatter hoarder's problem
Alpern, Steve; Fokkink, Robbert; Lidbetter, Thomas; Clayton, Nicola S.
2012-01-01
Scatter hoarders are animals (e.g. squirrels) that cache food (nuts) over a number of sites for later collection. A certain minimum amount of food must be recovered, possibly after pilfering by another animal, in order to survive the winter. An optimal caching strategy is one that maximizes the survival probability, given worst-case behaviour of the pilferer. We modify certain ‘accumulation games’ studied by Kikuta & Ruckle (2000 J. Optim. Theory Appl.) and Kikuta & Ruckle (2001 Naval Res. Logist.), which modelled the problem of optimal diversification of resources against catastrophic loss, to include the depth at which the food is hidden at each caching site. Optimal caching strategies can then be determined as equilibria in a new ‘caching game’. We show how the distribution of food over sites and the site depths of the optimal caching vary with the animal's survival requirements and the amount of pilfering. We show that in some cases, ‘decoy nuts’ are required to be placed above other nuts that are buried further down at the same site. Methods from the field of search games are used. Some empirically observed behaviour can be shown to be optimal in our model. PMID:22012971
DOE Office of Scientific and Technical Information (OSTI.GOV)
Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl; Kooy, Hanne M.; Heijmen, Ben J.M.
2015-06-01
Purpose: To shorten delivery times of intensity modulated proton therapy by reducing the number of energy layers in the treatment plan. Methods and Materials: We have developed an energy layer reduction method, which was implemented into our in-house-developed multicriteria treatment planning system “Erasmus-iCycle.” The method consisted of 2 components: (1) minimizing the logarithm of the total spot weight per energy layer; and (2) iteratively excluding low-weighted energy layers. The method was benchmarked by comparing a robust “time-efficient plan” (with energy layer reduction) with a robust “standard clinical plan” (without energy layer reduction) for 5 oropharyngeal cases and 5 prostate cases. Both plans of each patient had equal robust plan quality, because the worst-case dose parameters of the standard clinical plan were used as dose constraints for the time-efficient plan. Worst-case robust optimization was performed, accounting for setup errors of 3 mm and range errors of 3% + 1 mm. We evaluated the number of energy layers and the expected delivery time per fraction, assuming 30 seconds per beam direction, 10 ms per spot, and 400 Giga-protons per minute. The energy switching time was varied from 0.1 to 5 seconds. Results: The number of energy layers was on average reduced by 45% (range, 30%-56%) for the oropharyngeal cases and by 28% (range, 25%-32%) for the prostate cases. When assuming 1, 2, or 5 seconds energy switching time, the average delivery time was shortened from 3.9 to 3.0 minutes (25%), 6.0 to 4.2 minutes (32%), or 12.3 to 7.7 minutes (38%) for the oropharyngeal cases, and from 3.4 to 2.9 minutes (16%), 5.2 to 4.2 minutes (20%), or 10.6 to 8.0 minutes (24%) for the prostate cases. Conclusions: Delivery times of intensity modulated proton therapy can be reduced substantially without compromising robust plan quality. Shorter delivery times are likely to reduce treatment uncertainties and costs.
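Under the timing assumptions stated in the abstract (30 s per beam direction, 10 ms per spot, 400 Giga-protons per minute), the expected delivery time per fraction is a simple sum. A sketch of that bookkeeping; treating every energy layer as exactly one energy switch is our simplifying assumption, and the example plan numbers are invented:

```python
def fraction_delivery_time_s(n_beams, n_spots, n_layers,
                             gigaprotons, energy_switch_s):
    """Expected delivery time per fraction: 30 s per beam direction,
    10 ms per spot, 400 Gp/min delivery rate, plus one energy switch
    per energy layer (a simplification)."""
    beam_time = 30.0 * n_beams
    spot_time = 0.010 * n_spots
    proton_time = 60.0 * gigaprotons / 400.0
    switch_time = energy_switch_s * n_layers
    return beam_time + spot_time + proton_time + switch_time

# Illustrative plan: 3 beams, 6000 spots, 75 layers, 300 Gp, 2 s switches.
print(fraction_delivery_time_s(3, 6000, 75, 300, 2.0) / 60.0, "min")
```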
Busch, Martin H J; Vollmann, Wolfgang; Grönemeyer, Dietrich H W
2006-05-26
Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow obtaining diagnostic information from the implant lumen (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach (1/4) of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor, and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution, with a definite hot spot power loss, represents the worst case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst case assumptions, additional conditions are considered in a numerical simulation; these are more realistic and may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into both wire parts at the location of a defect. The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is accounted for in some simulations, because the simultaneous appearance of all worst case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. The analytical solution as worst case scenario, as well as the finite volume analysis for near-worst-case situations, show non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting under a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases.
Wire radius and wire material, as well as the physiological parameters blood perfusion and blood flow inside larger vessels, reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. The worst case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, and not close to, the worst case. In both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as the absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q V(ind) < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore have no risk of high temperature increases. The finite volume analysis shows clearly that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q V(ind) > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for.
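The analytical worst case is the classical steady-state point source in an infinite homogeneous conducting medium, where the temperature rise at distance r from a source of power P is ΔT(r) = P/(4πkr). The sketch below inverts that relation to obtain the radius, and hence the volume, inside which a critical temperature rise is exceeded; the power, conductivity, and threshold values are illustrative, not the paper's parameters.

```python
import math

def critical_radius_mm(power_w, k_w_per_mk, dT_crit):
    """Steady-state point source in an infinite homogeneous medium:
    dT(r) = P / (4*pi*k*r); solve dT(r_c) = dT_crit for r_c."""
    r_m = power_w / (4.0 * math.pi * k_w_per_mk * dT_crit)
    return r_m * 1000.0

# Illustrative: 50 mW hot spot, tissue-like conductivity, 5 K threshold.
r = critical_radius_mm(power_w=0.05, k_w_per_mk=0.5, dT_crit=5.0)
vol_mm3 = (4.0 / 3.0) * math.pi * r ** 3
print(f"r_crit = {r:.2f} mm, critical volume = {vol_mm3:.1f} mm^3")
```

With these example values the critical volume comes out in the range of several cubic millimeters, consistent in order of magnitude with the volumes discussed in the abstract.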
Busch, Martin HJ; Vollmann, Wolfgang; Grönemeyer, Dietrich HW
2006-01-01
Background Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure. This may allow getting diagnostic information of the implant lumen (in stent stenosis or thrombosis for example). The electro magnetic rf-pulses during magnetic resonance imaging induce a current in the circuit path of the resonator. A by material fatigue provoked partial rupture of the circuit path or a broken wire with touching surfaces can set up a relatively high resistance on a very short distance, which may behave as a point-like power source, a hot spot, inside the body part the resonator is implanted to. This local power loss inside a small volume can reach ¼ of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and depends as well from the orientation of the resonator with respect to the main magnetic field and the imaging sequence the resonator is exposed to. Methods First an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting with this worst case assumptions additional conditions are considered in a numerical simulation, which are more realistic and may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into both wire parts at the location of a defect. The energy is distributed from there by heat conduction. Additionally the effect of blood perfusion and blood flow is respected in some simulations because the simultaneous appearance of all worst case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. Results The analytical solution as worst case scenario as well as the finite volume analysis for near worst case situations show not negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting below a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases. 
Wire radius, wire material and the physiological parameters blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but do not exclude critical tissue heating for resonators with a large product of resonator volume and quality factor. Conclusion: The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate results for conditions both near to and not close to the worst case. In both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as the absolute worst case, shows that resonators with a small product of inductance volume and quality factor (Q Vind < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore carry no risk of high temperature increases. The finite volume analysis demonstrates that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q Vind > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for. PMID:16729878
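The analytical worst case above is essentially the steady-state temperature field of a point power source in an infinite homogeneous medium without perfusion. A minimal sketch of that textbook solution follows; the power level, distance, and tissue conductivity are illustrative assumptions, not the paper's measured values:

```python
import math

def steady_state_rise(power_w, r_m, k_tissue=0.5):
    """Steady-state temperature rise (K) at distance r from a point
    heat source of power P in an infinite homogeneous medium:
    dT(r) = P / (4 * pi * k * r). k_tissue ~ 0.5 W/(m K) is a typical
    assumed thermal conductivity for soft tissue (no perfusion)."""
    return power_w / (4.0 * math.pi * k_tissue * r_m)

# Illustrative only: a 100 mW hot spot evaluated 1 mm away.
print(f"{steady_state_rise(0.1, 1e-3):.1f} K rise")  # about 15.9 K
```

The 1/r form makes clear why the critically heated volume scales with the hot-spot power loss, and hence with the product of resonator volume and quality factor.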
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of the characteristics of wind power. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow is developed that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem that optimises the VAR location and size in one stage, minimises the fuel cost in the other stage, and eventually finds the global optimal RPP results iteratively. Benders decomposition is used to solve this model with an upper-level problem (master problem) for VAR allocation optimisation and a lower problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIGs) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
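The master/sub-problem split described above can be illustrated on a toy two-stage problem. This is a generic Benders loop (buy capacity now, pay recourse later), not the authors' RPP model; all numbers and the scipy-based master solve are assumptions for illustration:

```python
from scipy.optimize import linprog

# Toy stand-in for the two-stage structure: first stage buys capacity
# x at unit cost f; second stage covers the shortfall (b - x) at a
# higher unit recourse cost d.  Numbers are invented.
f, d, b, x_max = 1.0, 5.0, 10.0, 10.0
cuts = []  # dual multipliers lam defining Benders cuts theta >= lam*(b - x)

for _ in range(20):
    # Master: min f*x + theta subject to the accumulated cuts.
    A_ub = [[-lam, -1.0] for lam in cuts] or None
    b_ub = [-lam * b for lam in cuts] or None
    res = linprog([f, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, x_max), (0, None)])
    x_hat, theta_hat = res.x
    # Sub-problem, solved analytically here: q(x) = d * max(b - x, 0),
    # whose dual multiplier is d when the shortfall is positive.
    q = d * max(b - x_hat, 0.0)
    if q <= theta_hat + 1e-9:   # master's estimate matches recourse cost
        break
    cuts.append(d if b - x_hat > 0 else 0.0)  # add an optimality cut

print(f"capacity x = {x_hat:.2f}, total cost = {f * x_hat + q:.2f}")
```

In the paper's setting the master would place and size VAR sources and the sub-problem would evaluate expected generation cost over the wind scenarios, but the cut-and-iterate mechanics are the same.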
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Jamaluddin, A.; Jauhari, W. A.; Saputro, D. R. S.
2017-06-01
In this study, we consider a stochastic integrated manufacturer-retailer inventory model with a service level constraint. The model analyzed in this article considers the situation in which the vendor and the buyer establish a long-term contract and strategic partnership to jointly determine the best strategy. The lead time and setup cost are assumed to be controllable through an additional crashing cost and an investment, respectively. It is assumed that shortages are allowed and partially backlogged on the buyer's side, and that the protection interval (i.e., review period plus lead time) demand distribution is unknown but has given finite first and second moments. The objective is to apply the minmax distribution free approach to simultaneously optimize the review period, the lead time, the setup cost, the safety factor, and the number of deliveries in order to minimize the joint total expected annual cost. The service level constraint guarantees that the service level requirement is satisfied even in the worst case. By constructing the Lagrange function, the solution procedure is analyzed and a solution algorithm is developed. Moreover, a numerical example and sensitivity analysis are given to illustrate the proposed model and to provide some observations and managerial implications.
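Minmax distribution-free approaches of this kind typically rest on a Scarf-type bound on expected shortage, which holds for any demand distribution with the given first two moments. A small sketch of that standard bound (the parameter values in the usage line are illustrative, not the paper's):

```python
import math

def worst_case_shortage(mu, sigma, r):
    """Scarf-type distribution-free bound: when only the mean mu and
    standard deviation sigma of protection-interval demand X are
    known, E[(X - r)^+] <= (sqrt(sigma^2 + (r - mu)^2) - (r - mu)) / 2,
    with equality for a worst-case two-point distribution."""
    return 0.5 * (math.sqrt(sigma**2 + (r - mu)**2) - (r - mu))

# With a reorder point r = mu + k*sigma the bound reduces to
# sigma * (sqrt(1 + k**2) - k) / 2, which is how the safety factor k
# enters minmax cost and service-level expressions.
print(worst_case_shortage(mu=100.0, sigma=20.0, r=140.0))  # k = 2
```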
Improved minimum cost and maximum power two stage genome-wide association study designs.
Stanhope, Stephen A; Skol, Andrew D
2012-01-01
In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.
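For orientation, the quantity such designs minimize is typically the genotyping bill below; what the paper adds is freeing the allocation of cases and controls between stages. A hypothetical sketch with made-up per-genotype costs and marker counts:

```python
def genotyping_cost(M, n1, n2, pi_markers, c1, c2):
    """Genotyping cost of a two-stage GWAS: all M markers are typed on
    the stage-1 sample (n1 individuals), and the fraction pi_markers
    judged promising is followed up on the stage-2 sample (n2
    individuals). c1 and c2 are assumed per-genotype costs."""
    return c1 * M * n1 + c2 * (pi_markers * M) * n2

# Illustrative: 500k markers, 1000 stage-1 and 4000 stage-2 subjects,
# 0.5% of markers carried forward to stage 2.
print(genotyping_cost(M=5e5, n1=1000, n2=4000, pi_markers=0.005,
                      c1=0.01, c2=0.05))
```

A design search would sweep n1, n2, and pi_markers (and, per the paper, the case/control mix within each stage) subject to a power constraint.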
NASA Technical Reports Server (NTRS)
Chapman, D. K.; Brown, A. H.
1979-01-01
The importance of temperature control to HEFLEX, a Spacelab experiment designed to measure kinetic properties of Helianthus nutation in a low-g environment, is discussed. It is argued that the development of the HEFLEX experiment has been severely hampered by the inadequate control of ambient air temperature provided by the spacecraft module design. A worst-case calculation shows that delivery of only 69% of the maximum yield of useful data from the HEFLEX system is guaranteed; significant data losses from inadequate temperature control are expected. The magnitude of the expected data losses indicates that the cost reductions associated with imprecise temperature control may prove to be a false economy in the long term.
NASA Astrophysics Data System (ADS)
Leal-Junior, Arnaldo G.; Vargas-Valencia, Laura; dos Santos, Wilian M.; Schneider, Felipe B. A.; Siqueira, Adriano A. G.; Pontes, Maria José; Frizera, Anselmo
2018-07-01
This paper presents a low-cost and highly reliable system for angle measurement based on sensor fusion between inertial and fiber optic sensors. The system fuses, through a Kalman filter, two inertial measurement units (IMUs) and an intensity-variation-based polymer optical fiber (POF) curvature sensor. In addition, the IMU was applied as a reference for a compensation technique for the POF curvature sensor's hysteresis. The proposed system was applied to knee angle measurement of a lower limb exoskeleton in flexion/extension cycles and in gait analysis. Results show the accuracy of the system: the Root Mean Square Error (RMSE) between the POF-IMU sensor system and the encoder was below 4° in the worst case and about 1° in the best case. The POF-IMU sensor system was then evaluated as a wearable sensor for knee joint angle assessment without the exoskeleton, where its suitability for this purpose was demonstrated. The results obtained in this paper pave the way for future applications of sensor fusion between electronic and fiber optic sensors in movement analysis.
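A scalar Kalman filter fusing two noisy angle readings gives the flavor of the fusion step. This is a minimal random-walk sketch with assumed noise variances, not the filter tuned in the paper:

```python
import numpy as np

def fuse_angles(z_imu, z_pof, q=0.5, r_imu=4.0, r_pof=1.0):
    """Scalar Kalman filter fusing two noisy knee-angle sensors.
    Random-walk state model; q is the process noise variance and
    r_imu / r_pof are assumed measurement noise variances."""
    x, p = float(z_imu[0]), 1.0
    fused = []
    for zi, zp in zip(z_imu, z_pof):
        p += q                                   # predict step
        for z, r in ((zi, r_imu), (zp, r_pof)):  # sequential updates
            k = p / (p + r)                      # Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
        fused.append(x)
    return np.array(fused)

# Illustrative run on a synthetic 0-90 degree flexion ramp.
rng = np.random.default_rng(1)
truth = np.linspace(0.0, 90.0, 200)
est = fuse_angles(truth + rng.normal(0, 2, 200),
                  truth + rng.normal(0, 1, 200))
print(f"RMSE: {np.sqrt(np.mean((est - truth) ** 2)):.2f} deg")
```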
Jow, Uei-Ming; Ghovanloo, Maysam
2012-12-21
We present a design methodology for an overlapping hexagonal planar spiral coil (hex-PSC) array, optimized to create a homogeneous magnetic field for wireless power transmission to randomly moving objects. The modular hex-PSC array has been implemented in the form of three parallel conductive layers, for which an iterative optimization procedure defines the PSC geometries. Since the overlapping hex-PSCs in different layers have different characteristics, the design should provide the maximum power transfer efficiency (PTE) under the worst-case coil-coupling condition in order to minimize spatial fluctuations in the received power. In the worst case, the transmitter (Tx) hex-PSC is overlapped by six PSCs and surrounded by six other adjacent PSCs. Using a receiver (Rx) coil, 20 mm in radius, at a coupling distance of 78 mm and a maximum lateral misalignment of 49.1 mm (1/√3 of the PSC radius), we can receive power at a PTE of 19.6% from the worst-case PSC. Furthermore, we have studied the effects of Rx coil tilting and concluded that the PTE degrades significantly when θ > 60°. Solutions are: 1) activating two adjacent overlapping hex-PSCs simultaneously with out-of-phase excitations to create horizontal magnetic flux and 2) including a small energy storage element in the Rx module to maintain power in the worst-case scenarios. To verify the proposed design methodology, we have developed the EnerCage system, which aims to power up biological instruments attached to or implanted in freely behaving small animal subjects' bodies in long-term electrophysiology experiments within large experimental arenas.
Failed State 2030: Nigeria - A Case Study
2011-02-01
…disastrous ecological conditions in its Niger Delta region, and is fighting one of the modern world's worst legacies of political and economic corruption. A nation with more than 350 ethnic groups, 250 languages, and three distinct religious traditions… The discussion herein is a mix of cultural sociology, political science, economics, and military science…
Social Cost of Leptospirosis Cases Attributed to the 2011 Disaster Striking Nova Friburgo, Brazil
Pereira, Carlos; Barata, Martha; Trigo, Aline
2014-01-01
The aim of this study was to estimate the social cost of the leptospirosis cases that were attributed to the natural disaster of January 2011 in Nova Friburgo (State of Rio de Janeiro, Brazil) through a partial economic assessment. This study utilized secondary data supplied by the Municipal Health Foundation of Nova Friburgo. Income scenarios based on the national and state minimum wages and on average income of the local population were employed. The total social cost of leptospirosis cases attributed to the 2011 disaster may range between US$21,500 and US$66,000 for the lower income scenario and between US$23,900 and US$100,800 for that of higher income. Empirical therapy represented a total avoided cost of US$14,800, in addition to a reduction in lethality. An estimated 31 deaths were avoided among confirmed cases of the disease, and no deaths resulted from the leptospirosis cases attributed to the natural disaster. There has been a significant post-disaster rise in leptospirosis incidence in the municipality, which illustrates the potential for increased cases—and hence costs—of this illness following natural disasters, which justifies the adoption of preventive measures in environmental health. PMID:24739767
NASA Technical Reports Server (NTRS)
Avila, Arturo
2011-01-01
Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature. These simulations thus represent the upper and lower bounds, which effectively embodies the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
Minimum-Cost Aircraft Descent Trajectories with a Constrained Altitude Profile
NASA Technical Reports Server (NTRS)
Wu, Minghong G.; Sadovsky, Alexander V.
2015-01-01
An analytical formula for solving the speed profile that accrues minimum cost during an aircraft descent with a constrained altitude profile is derived. The optimal speed profile first reaches a certain speed, called the minimum-cost speed, as quickly as possible using an appropriate extreme value of thrust. The speed profile then stays on the minimum-cost speed as long as possible, before switching to an extreme value of thrust for the rest of the descent. The formula is applied to an actual arrival route and its sensitivity to winds and airlines' business objectives is analyzed.
An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1981-01-01
An algorithm for minimum-cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that consume the least coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.
An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router
NASA Astrophysics Data System (ADS)
Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua
2016-10-01
Virtual routers enable the coexistence of different networks on the same physical facility and have lately attracted a great deal of attention from researchers. As the number of IPv6 addresses is rapidly increasing in virtual routers, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called the weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of the virtual routers into one spanning tree and compresses the space cost. WBT's average and worst-case time complexities for both lookup and update are O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison with schemes that store the FIBs separately. WBT also achieves the smallest average search depth compared with other homogeneous algorithms.
NASA Technical Reports Server (NTRS)
Holladay, Jon; Day, Greg; Gill, Larry
2004-01-01
Spacecraft are typically designed with a primary focus on weight in order to meet launch vehicle performance parameters. However, for pressurized and/or man-rated spacecraft, it is also necessary to understand the vehicle's operating environments to properly size the pressure vessel. Proper sizing of the pressure vessel requires an understanding of the space vehicle's life cycle and a comparison of the physical design optimization (weight and launch "cost") with downstream operational complexity and total life cycle cost. This paper will provide an overview of some major environmental design drivers and provide examples of calculating the optimal design pressure versus a selected set of design parameters related to thermal and environmental perspectives. In addition, this paper will provide a generic set of cracking pressures for both positive and negative pressure relief valves that encompasses worst-case environmental effects for a variety of launch/landing sites. Finally, several examples are included to highlight pressure relief set points and vehicle weight impacts for a selected set of orbital missions.
Fast marching methods for the continuous traveling salesman problem.
Andrews, June; Sethian, J A
2007-01-23
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The complexity of the heuristic algorithm is in the worst case O(M·N log N), where M is the number of cities and N is the size of the computational mesh used to approximate the solutions to the shortest-path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
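To make the two-layer structure concrete, the sketch below substitutes Dijkstra on a 4-connected grid for the fast marching solver (fast marching solves the continuous Eikonal equation; Dijkstra is a discrete stand-in) and builds a greedy nearest-neighbour tour, mirroring the M shortest-path maps behind the O(M·N log N) bound. All inputs are illustrative, and this is not the authors' code:

```python
import heapq
import numpy as np

def grid_distances(cost, src):
    """Dijkstra on a 4-connected grid: cheapest travel cost from src
    to every cell, where entering a cell costs cost[cell]."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue  # stale heap entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                heapq.heappush(pq, (dist[ni, nj], (ni, nj)))
    return dist

def nearest_neighbour_tour(cities, cost):
    """Greedy heuristic tour: one shortest-path map per city (the
    M * N log N part), then hop to the cheapest unvisited city."""
    dmaps = {c: grid_distances(cost, c) for c in cities}
    tour, left = [cities[0]], set(cities[1:])
    while left:
        nxt = min(left, key=lambda c: dmaps[tour[-1]][c])
        tour.append(nxt)
        left.remove(nxt)
    return tour + [cities[0]]

# Illustrative domain: uniform cost with one expensive region.
cost = np.ones((50, 50))
cost[20:30, :25] = 5.0
print(nearest_neighbour_tour([(5, 5), (45, 10), (25, 40), (10, 45)], cost))
```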
2012-04-30
DoD SERC, Aeronautics & Astronautics, NPS 9th Annual Acquisition Research Symposium, 5/16/2012. [Figure residue: plots of the probability to complete a mission versus time (minutes) for two candidate architectures, including worst-case curves versus the percentage of system failures for architecture 1 and architecture 2.]
A review of management of infertility in Nigeria: framing the ethics of a national health policy.
Akinloye, Oluyemi; Truter, Ernest J
2011-01-01
Infertility has recently been construed to be a serious problem in sub-Saharan Africa. This problem seems to be viewed as of low priority with reference to the effective and efficient allocation of available health resources by national governments as well as by international donors sponsoring either research or service delivery in the public health sector. In this paper the problem of infertility in Nigeria is surveyed with a view to assessing the ethical dimension of proposals to manage infertility as a public sector priority in health care delivery. The population/individual and public/private distinctions in the formulation of health policy have ethical implications that cannot simply be ignored and are therefore engaged in critically assessing the problem of infertility. Cost-utility analysis (such as the Quality-Adjusted Life-Year composite index) in the management of infertility in Nigeria entails the need for caution relevant to the country's efforts to achieve the Millennium Development Goals. This should remain the case whether the ethical evaluation appeals to utilitarian or contractarian (Rawlsian) principles. The "worst off" category of Nigerians includes (1) underweight children less than 5 years of age, with special concern for infants (0-1 years of age) and (2) the proportion of the population below a minimum level of dietary consumption. The Rawlsian ethic implies that any Federal Ministry of Health policy aimed at establishing public programs for infertility management can be considered a "fair" allocation and expenditure if, and only if, the situation for these two cohorts is not thereby made worse. Nigerian health policy cannot assume this type of increased allocation of its resources to infertility care without being hard pressed to warrant a defensible moral or rational argument.
Improving Life-Cycle Cost Management of Spacecraft Missions
NASA Technical Reports Server (NTRS)
Clardy, Dennon
2010-01-01
This presentation will explore the results of a recent NASA Life-Cycle Cost study and how project managers can use the findings and recommendations to improve planning and coordination early in the formulation cycle and avoid common pitfalls resulting in cost overruns. The typical NASA space science mission will exceed both the initial estimated and the confirmed life-cycle costs by the end of the mission. In a fixed-budget environment, these overruns translate to delays in starting or launching future missions, or in the worst case can lead to cancelled missions. Some of these overruns are due to issues outside the control of the project; others are due to the unpredictable problems (unknown unknowns) that can affect any development project. However, a recent study of life-cycle cost growth by the Discovery and New Frontiers Program Office identified a number of areas that are within the scope of project management to address. The study also found that the majority of the underlying causes for cost overruns are embedded in the project approach during the formulation and early design phases, but the actual impacts typically are not experienced until late in the project life cycle. Thus, project management focus in key areas such as integrated schedule development, management structure and contractor communications processes, heritage and technology assumptions, and operations planning, can be used to validate initial cost assumptions and set in place management processes to avoid the common pitfalls resulting in cost overruns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahi-Anwar, M; Young, S; Lo, P
Purpose: A method to discriminate different types of renal cell carcinoma (RCC) was developed using attenuation values observed in multiphasic contrast-enhanced CT. This work evaluates the sensitivity of this RCC discrimination task at different CT radiation dose levels. Methods: We selected 5 cases of kidney lesion patients who had undergone four-phase CT scans covering the abdomen to the iliac crest. Through an IRB-approved study, the scans were conducted on 64-slice CT scanners (Definition AS/Definition Flash, Siemens Healthcare) using automatic tube-current modulation (TCM). The protocol included an initial baseline unenhanced scan, followed by three post-contrast-injection phases. CTDIvol (32 cm phantom) measured between 9 and 35 mGy for any given phase. As a preliminary study, we limited the scope to the cortico-medullary phase, shown previously to be the most discriminative phase. A previously validated method was used to simulate reduced-dose acquisitions by adding noise to raw CT sinogram data, emulating corresponding images at simulated doses of 50%, 25%, and 10%. To discriminate the lesion subtype, ROIs were placed in the most enhancing region of the lesion. The mean HU value of an ROI was extracted and used to assign the worst-case RCC subtype, ranked in the order of clear cell, papillary, chromophobe and the benign oncocytoma. Results: Two patients exhibited a change of worst-case RCC subtype between the original and simulated scans, at the 25% and 10% doses. In one case, the worst-case RCC subtype changed from oncocytoma to chromophobe at the 10% and 25% doses, while in the other case it changed from oncocytoma to clear cell at the 10% dose. Conclusion: Based on preliminary results from an initial cohort of 5 patients, worst-case RCC subtypes remained constant at all simulated dose levels except in 2 patients. Further study conducted on more patients will be needed to confirm our findings. Institutional research agreement, Siemens Healthcare; past recipient, research grant support, Siemens Healthcare; consultant, Toshiba America Medical Systems; consultant, Samsung Electronics; NIH grant support from U01 CA181156.
Frost damage in citric and olive production as the result of climate degradation
NASA Astrophysics Data System (ADS)
Saa Requejo, A.; Díaz Alvarez, M. C.; Tarquis, A. M.; Burgaz Moreno, F.; Garcia Moreno, R.
2009-04-01
Low temperature is one of the chief limiting factors in plant distribution. Freezing temperatures shorten the growing season and may lower the yield and quality of any number of fruit crops. Minimum temperature records for the Spanish region of Murcia were studied as a limiting factor in fruit production. An analysis of temperature series since 1935 showed that the range of the absolute minimum temperatures (Tmin) on frost days in the target year, namely -0.5 °C to -4.0 °C, was statistically similar to the range recorded in 1993, while the mean minimum temperatures (tmin) were found to have risen. Through 1985, tmin ranged from 4.0 to -2.0 °C, depending on the area, while in more recent years these limits shifted to 7.0 to 0.5 °C. Because of this increase in mean temperature, the frost episode of March 2004 was considered by lemon, mandarin and olive producers the worst in many years for frost damage, since the minimum temperature was reached at a more sensitive phenological stage, despite the statistical evidence that similar freezing temperatures had been reached on similar dates in other years.
Uncertainty analysis of least-cost modeling for designing wildlife linkages.
Beier, Paul; Majka, Daniel R; Newell, Shawn L
2009-12-01
Least-cost models for focal species are widely used to design wildlife corridors. To evaluate the least-cost modeling approach used to develop 15 linkage designs in southern California, USA, we assessed robustness of the largest and least constrained linkage. Species experts parameterized models for eight species with weights for four habitat factors (land cover, topographic position, elevation, road density) and resistance values for each class within a factor (e.g., each class of land cover). Each model produced a proposed corridor for that species. We examined the extent to which uncertainty in factor weights and class resistance values affected two key conservation-relevant outputs, namely, the location and modeled resistance to movement of each proposed corridor. To do so, we compared the proposed corridor to 13 alternative corridors created with parameter sets that spanned the plausible ranges of biological uncertainty in these parameters. Models for five species were highly robust (mean overlap 88%, little or no increase in resistance). Although the proposed corridors for the other three focal species overlapped as little as 0% (mean 58%) of the alternative corridors, resistance in the proposed corridors for these three species was rarely higher than resistance in the alternative corridors (mean difference was 0.025 on a scale of 1-10; worst difference was 0.39). As long as the model had the correct rank order of resistance values and factor weights, our results suggest that the predicted corridor is robust to uncertainty. The three carnivore focal species, alone or in combination, were not effective umbrellas for the other focal species. The carnivore corridors failed to overlap the predicted corridors of most other focal species and provided relatively high resistance for the other focal species (mean increase of 2.7 resistance units). Least-cost modelers should conduct uncertainty analysis so that decision-makers can appreciate the potential impact of model uncertainty on conservation decisions. Our approach to uncertainty analysis (which can be called a worst-case scenario approach) is appropriate for complex models in which the distribution of the input parameters cannot be specified.
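The resistance surfaces such models search over are weighted sums of per-factor rasters, and the uncertainty analysis amounts to sweeping plausible weight sets and comparing the resulting corridors. A schematic sketch; the factor count, step size, and random rasters are assumptions, not the study's data:

```python
import itertools
import numpy as np

def resistance_surface(layers, weights):
    """Weighted sum of per-factor resistance rasters (land cover,
    topography, ...), the input to least-cost corridor modelling."""
    return sum(w * r for w, r in zip(weights, layers))

def weight_sets(step=0.25):
    """Enumerate 3-factor weight combinations summing to 1, for a
    worst-case-style sensitivity sweep over plausible parameters."""
    vals = np.arange(0.0, 1.0 + 1e-9, step)
    for a, b in itertools.product(vals, vals):
        if a + b <= 1.0 + 1e-9:
            yield (a, b, 1.0 - a - b)

# Illustrative 3-factor stack on a tiny raster; each alternative
# surface would then be fed to a least-cost-path routine and the
# resulting corridors compared for overlap and resistance.
rng = np.random.default_rng(0)
layers = [rng.uniform(1, 10, (20, 20)) for _ in range(3)]
surfaces = [resistance_surface(layers, w) for w in weight_sets()]
print(f"{len(surfaces)} alternative resistance surfaces")
```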
Tompa, Emile; Dolinschi, Roman; Alamgir, Hasanat; Sarnocinska-Hart, Anna; Guzman, Jaime
2016-05-01
To evaluate whether a peer-coaching programme for patient lift use in British Columbia, Canada, was effective and cost-beneficial. We used monthly panel data from 15 long-term care facilities from 2004 to 2011 to estimate the number of patient-handling injuries averted by the peer-coaching programme using a generalised estimating equation model. Facilities that had not yet introduced the programme served as concurrent controls. Accepted lost-time claim counts related to patient handling were the outcome of interest, with a denominator of full-time equivalents of nursing staff. A cost-benefit approach was used to estimate the net monetary gains at the system level. The coaching programme was found to be associated with a reduction in the injury rate of 34% during the programme and 56% after the programme concluded, with an estimated 62 lost-time injury claims averted. Two other factors were associated with changes in injury rates: larger facilities had a lower injury rate, and the more care hours per bed the lower the injury rate. We calculated monetary benefits to the system of $748,431 and costs of $894,000 (both in 2006 Canadian dollars), with a benefit-to-cost ratio of 0.84. The benefit-to-cost ratio was -0.05 in the worst-case scenario and 2.31 in the best-case scenario. The largest cost item was peer coaches' time. A simulation of the programme continuing for 5 years with the same coaching intensity would result in a benefit-to-cost ratio of 0.63. A peer-coaching programme to increase effective use of overhead lifts prevented additional patient-handling injuries but added modest incremental cost to the system. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-03-01
Estimates of the costs associated with implementation of the Resource Conservation and Recovery Act (RCRA) regulations for non-hazardous and hazardous material disposal in the utility industry are provided. These costs are based on engineering studies at a number of coal-fired power plants in which the costs for hazardous and non-hazardous disposal are compared to the costs developed for the current-practice design for each utility. The relationship of the three costs is displayed. The emphasis of this study is on the determination of incremental costs rather than the absolute costs for each case (current practice, non-hazardous, or hazardous). For the purpose of this project, the hazardous design cost was determined for both minimum and maximum compliance.
Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino
2017-03-01
Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance, in terms of plan quality and robustness, of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases, in which tight dose limits were hard to meet under most uncertainty scenarios. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull-base cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
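A toy version of the worst-case idea in LP form: require the prescription to be met in every uncertainty scenario while minimizing total spot weight. The dose-influence numbers below are invented and bear no relation to the clinical plans studied:

```python
import numpy as np
from scipy.optimize import linprog

# Toy robust spot-weight problem: 3 spots, 2 target voxels, 3
# setup/range scenarios, each with its own dose-influence matrix.
D = [np.array([[1.0, 0.5, 0.2], [0.3, 1.0, 0.6]]),
     np.array([[0.9, 0.6, 0.2], [0.3, 0.9, 0.7]]),
     np.array([[1.1, 0.4, 0.3], [0.2, 1.1, 0.5]])]
presc = np.array([1.0, 1.0])  # prescribed dose per voxel

# Robust constraint: every scenario reaches the prescription,
# i.e. -D_s x <= -presc for all s; minimise total spot weight x.
A_ub = np.vstack([-Ds for Ds in D])
b_ub = np.concatenate([-presc] * len(D))
res = linprog(c=np.ones(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3)
print("spot weights:", res.x, "total:", res.fun)
```

An NLP variant would instead penalize, say, squared underdose in the worst scenario; the LP form is what allows very sparse spot sets.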
NASA Astrophysics Data System (ADS)
Saraswati, D.; Sari, D. K.; Johan, V.
2017-11-01
The study was conducted at a manufacturer that produces various kinds of kitchenware, with the kitchen sink as its main product. Four types of steel sheets were selected as the raw materials for the kitchen sink. The manufacturer wanted to determine how much steel sheet to order from a single supplier to meet the production requirements while minimizing the total inventory cost. In this case, an economic order quantity (EOQ) model was developed with all-unit discount pricing for the steel sheets and limited warehouse capacity. A genetic algorithm (GA) was used to find the minimum of the total inventory cost as the sum of purchasing cost, ordering cost, holding cost and penalty cost.
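The all-units discount makes the total-cost curve piecewise, which is why the plain EOQ must be evaluated per price bracket. A minimal sketch of that textbook procedure, without the warehouse constraint and penalty cost that the paper's GA also handles; all numbers are illustrative:

```python
import math

def best_order_qty(D, S, i, breaks):
    """All-units quantity-discount EOQ. D: annual demand, S: ordering
    cost per order, i: holding rate (fraction of unit price per year),
    breaks: [(min_qty, unit_price), ...]. For each price bracket,
    take the unconstrained EOQ, push it up to the bracket's minimum
    quantity if needed, and keep the cheapest candidate."""
    best = None
    for qmin, price in breaks:
        q = math.sqrt(2 * D * S / (i * price))  # unconstrained EOQ
        q = max(q, qmin)                        # force into the bracket
        total = D * price + D * S / q + i * price * q / 2
        if best is None or total < best[1]:
            best = (q, total)
    return best

print(best_order_qty(D=1200, S=50, i=0.2,
                     breaks=[(0, 10.0), (500, 9.5), (1000, 9.2)]))
# -> order about 500 units at the 9.5 price, total cost ~ 11995
```

A GA becomes attractive once the multi-item warehouse constraint and penalty cost couple the order quantities of the four sheet types.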
Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation
NASA Astrophysics Data System (ADS)
Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro
2018-02-01
We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.
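The building block of the entropic divergence described above is the relative entropy between two measurement probability vectors. A one-function sketch for the finite-dimensional case (illustrative, not the authors' code):

```python
import numpy as np

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) for finite probability
    vectors, in nats. Terms with p_i = 0 contribute zero; the value
    is infinite if q_i = 0 where p_i > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Target vs. approximating measurement distribution for one state.
print(relative_entropy([0.5, 0.5], [0.9, 0.1]))  # about 0.51 nats
```

The paper's divergence then takes a maximum of such losses over states and a minimum over joint measurements to obtain the state-independent incompatibility degree.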
[Home health resource utilization measures using a case-mix adjustor model].
You, Sun-Ju; Chang, Hyun-Sook
2005-08-01
The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects of this study were 484 patients who had received home health care for more than 4 visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients had to be merged onto a 60-day payment segment. Based on the results, the researcher classified home health resource groups (HHRG). The subjects were classified into 34 HHRGs in Korea. Home health resource utilization according to clinical severity was in the order Minimum (C0) < Low (C1) < Moderate (C2) < High (C3); according to dependency in daily activities it was in the order Minimum (F0) < High (F3) < Medium (F2) < Low (F1) < Maximum (F4). Resource utilization by HHRG was highest, at 564,735 won, in group C0F0S2 (clinical severity minimum, dependency in daily activity minimum, service utilization moderate), and lowest, at 97,000 won, in group C2F3S1, so the former was 5.82 times higher than the latter. Resource utilization in home health care has become an issue of concern due to rising costs for home health care. The results suggest the need for more analytical attention on the utilization and expenditures for home care using a Case-Mix Adjustor Model.
Sisk, Jane E; Whang, William; Butler, Jay C; Sneller, Vishnu-Priya; Whitney, Cynthia G
2003-06-17
Guidelines are increasingly recommending preventive services starting at 50 years of age, and policymakers are considering such a recommendation for pneumococcal polysaccharide vaccination. The finding that pneumococcal vaccination is cost-saving for people 65 years of age or older raises the question of the vaccination's implications for other older adults, especially black people, whose disease incidence exceeds that of nonblack people, and those with high-risk conditions. To assess the implications of vaccinating black and nonblack people 50 through 64 years of age against invasive pneumococcal disease. Cost-effectiveness analysis. Published literature for vaccination effectiveness and cost estimates; data on disease incidence and case-fatality rates from the Centers for Disease Control and Prevention. Hypothetical cohort 50 through 64 years of age with the 1995 U.S. age distribution. Lifetime. Societal. Pneumococcal polysaccharide vaccination compared with no vaccination. Incremental medical costs and health effects, in quality-adjusted life-years per vaccinee. Vaccination saved medical costs and improved health among high-risk black people ($27.55 savings per vaccinee) and nonblack people ($5.92 savings per vaccinee), excluding survivors' future costs. For low-risk black and nonblack people and the overall general population, vaccination cost $2,477, $8,195, and $3,434, respectively, to gain 1 year of healthy life. Excluding survivors' future costs, in the general immunocompetent population, cost per quality-adjusted life-year in global worst-case results ranged from $21,513 for black people to $68,871 for nonblack people; in the high-risk population, cost ranged from $11,548 for black people to $39,000 for nonblack people. In the global best case, vaccination was cost-saving for black and nonblack people in the general immunocompetent and high-risk populations, excluding survivors' future costs. The cost-effectiveness range was narrower in probabilistic sensitivity analyses, with 95% probabilistic intervals ranging from cost-saving to $1,594 for black people and from cost-saving to $12,273 for nonblack people in the general immunocompetent population. Costs per quality-adjusted life-year for low-risk people with case-fatality rates from 1998 were $2,477 for black people and $8,195 for nonblack people, excluding survivors' medical costs. These results support the current recommendation to vaccinate high-risk people and provide useful information for considering extending the recommendation to the general population 50 through 64 years of age. Lack of evidence about the effectiveness of revaccination for people 65 years of age or older, when disease risks are higher, argues for further research to guide vaccination policy.
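Cost-utility figures like these come from the incremental cost-effectiveness ratio; a trivial helper makes the convention explicit (the numbers in the usage line are invented, not the study's):

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra
    quality-adjusted life-year. A negative value with a positive
    QALY gain means the intervention is cost-saving."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Invented numbers: an intervention that adds $40 per person
# and gains 0.005 QALYs relative to no intervention.
print(f"${icer(140.0, 100.0, 10.005, 10.0):,.0f} per QALY")  # $8,000
```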
Societal costs of underage drinking.
Miller, Ted R; Levy, David T; Spicer, Rebecca S; Taylor, Dexter M
2006-07-01
Despite minimum-purchase-age laws, young people regularly drink alcohol. This study estimated the magnitude and costs of problems resulting from underage drinking by category (traffic crashes, violence, property crime, suicide, burns, drownings, fetal alcohol syndrome, high-risk sex, poisonings, psychoses, and dependency treatment) and compared those costs with associated alcohol sales. Previous studies did not break out costs of alcohol problems by age. For each category of alcohol-related problems, we estimated fatal and nonfatal cases attributable to underage alcohol use. We multiplied alcohol-attributable cases by estimated costs per case to obtain total costs for each problem. Underage drinking accounted for at least 16% of alcohol sales in 2001. It led to 3,170 deaths and 2.6 million other harmful events. The estimated $61.9 billion bill (relative SE = 18.5%) included $5.4 billion in medical costs, $14.9 billion in work loss and other resource costs, and $41.6 billion in lost quality of life. Quality-of-life costs, which accounted for 67% of total costs, required challenging indirect measurement. Alcohol-attributable violence and traffic crashes dominated the costs. Leaving aside quality of life, the societal harm of $1 per drink consumed by an underage drinker exceeded the average purchase price of $0.90 or the associated $0.10 in tax revenues. Recent attention has focused on problems resulting from youth use of illicit drugs and tobacco. In light of the associated substantial injuries, deaths, and high costs to society, youth drinking behaviors merit the same kind of serious attention.
Satellite broadcasting system study
NASA Technical Reports Server (NTRS)
1972-01-01
The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum-cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum-cost system are shown graphically. Topics discussed include: the main line control program, the ground segment model, the space segment model, cost models, and launch vehicle selection. Several examples of minimum-cost systems resulting from the computer program are presented. A listing of the computer program is also included.
Risk to the public from carbon fibers released in civil aircraft accidents
NASA Technical Reports Server (NTRS)
1980-01-01
Because carbon fibers are strong, stiff, and lightweight, they are attractive for use in composite structures. Because they also have high electrical conductivity, free carbon fibers settling on electrical conductors can cause malfunctions. If released from the composite by burning, the fibers may become a hazard to exposed electrical and electronic equipment. As part of a Federal study of the potential hazard associated with the use of carbon fibers, NASA assessed the public risk associated with crash fire accidents of civil aircraft. The NASA study projected a dramatic increase in the use of carbon composites in civil aircraft and developed technical data to support the risk assessment. Personal injury was found to be extremely unlikely. In 1993, the year chosen as a focus for the study, the expected annual cost of damage caused by released carbon fibers is only $1000. Even the worst-case carbon fiber incident simulated (costing $178,000 once in 34,000 years) was relatively low-cost compared with the usual air transport accident cost. On the basis of these observations, the NASA study concluded that exploitation of composites should continue, that additional protection of avionics is unnecessary, and that development of alternate materials specifically to overcome this problem is not justified.
A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network
NASA Astrophysics Data System (ADS)
Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.
The location of CDCs within a supply chain network has become a matter of high concern. Present methods for locating a CDC have mainly been based on manual spreadsheet calculations aimed at achieving the minimum logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of present methods, and on examining the propriety of this algorithm through a case study. The algorithm suggested by this study is based on the principle of optimization on the directed graph of the SCM model and utilizes the traditionally established minimum spanning tree (MST) and shortest-path methods, among others. As a result, this study helps assess the suitability of the present SCM network and could serve as the criterion in the decision-making process for building an optimal SCM network for future demand prospects.
A minimum cost tolerance allocation method for rocket engines and robust rocket engine design
NASA Technical Reports Server (NTRS)
Gerth, Richard J.
1993-01-01
Rocket engine design follows three phases: systems design, parameter design, and tolerance design. Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes methods such as Quality Function Deployment and Taguchi methods. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb. It was therefore desirable to develop an optimization approach to tolerancing. The case study engine was the STME gas generator cycle. The design of the major components had been completed, and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as was an initial set of component tolerances. However, the question was whether there existed an optimal combination of tolerances that would result in the minimum cost without any degradation in system performance.
Risk calculation variability over time in ocular hypertensive subjects.
Song, Christian; De Moraes, Carlos Gustavo; Forchheimer, Ilana; Prata, Tiago S; Ritch, Robert; Liebmann, Jeffrey M
2014-01-01
To investigate the longitudinal variability of glaucoma risk calculation in ocular hypertensive (OHT) subjects. We reviewed the charts of untreated OHT patients followed in a glaucoma referral practice for a minimum of 60 months. Clinical variables collected at baseline and during follow-up included age, central corneal thickness (CCT), intraocular pressure (IOP), vertical cup-to-disc ratio (VCDR), and visual field pattern standard deviation (VFPSD). These were used to calculate the 5-year risk of conversion to primary open-angle glaucoma (POAG) at each follow-up visit using the Ocular Hypertension Treatment Study and European Glaucoma Prevention Study calculator (http://ohts.wustl.edu/risk/calculator.html). We also calculated the risk of POAG conversion based on the fluctuation of measured variables over time assuming the worst case scenarios (final age, highest PSD, lowest CCT, highest IOP, and highest VCDR) and best case scenarios (baseline age, lowest PSD, highest CCT, lowest IOP, and lowest VCDR) for each patient. Risk probabilities (%) were plotted against follow-up time to generate slopes of risk change over time. We included 27 untreated OHT patients (54 eyes) followed for a mean of 98.3±18.5 months. Seven individuals (25.9%) converted to POAG during follow-up. The mean 5-year risk of conversion for all patients in the study group ranged from 2.9% to 52.3% during follow-up. The mean slope of risk change over time was 0.37±0.81% increase/y. The mean slope for patients who reached a POAG endpoint was significantly greater than for those who did not (1.3±0.78 vs. 0.042±0.52%/y, P<0.01). In each patient, the mean risk of POAG conversion increased almost 10-fold when comparing the best case scenario with the worst case scenario (5.0% vs. 45.7%, P<0.01). The estimated 5-year risk of conversion to POAG among untreated OHT patients varies significantly during follow-up, with a trend toward increasing over time. Within the same individual, the estimated risk can vary almost 10-fold based on the variability of IOP, CCT, VCDR, and VFPSD. Therefore, a single risk calculation measurement may not be sufficient for accurate risk assessment, informed decision-making by patients, and physician treatment recommendations.
A Worst-Case Approach for On-Line Flutter Prediction
NASA Technical Reports Server (NTRS)
Lind, Rick C.; Brenner, Martin J.
1998-01-01
Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.
Conceptual design of multi-source CCS pipeline transportation network for Polish energy sector
NASA Astrophysics Data System (ADS)
Isoli, Niccolo; Chaczykowski, Maciej
2017-11-01
The aim of this study was to identify an optimal CCS transport infrastructure for the Polish energy sector with regard to a selected European Commission Energy Roadmap 2050 scenario. The work covers identification of the offshore storage site location, along with CO2 pipeline network design and sizing for deployment at a national scale and a CAPEX analysis. It was conducted for the worst-case scenario, wherein the power plants operate under full-load conditions. The input data for the evaluation of CO2 flow rates (flue gas composition) were taken from a selected cogeneration plant with a maximum electric capacity of 620 MW, and the results were extrapolated from these data given the power outputs of the remaining units. A graph search algorithm was employed to estimate pipeline infrastructure costs to transport 95 MT of CO2 annually, which amount to about 612.6 M€. Additional pipeline infrastructure costs will have to be incurred after 9 years of operation of the system due to limited storage site capacity. The results show that CAPEX estimates for CO2 pipeline infrastructure cannot rely on natural gas infrastructure data, since the two systems differ in pipe wall thickness, which affects material cost.
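The abstract says only that "a graph search algorithm was employed"; one plausible ingredient of such a network design is a minimum spanning tree over the emission sources and the storage site. A sketch using Prim's algorithm, with an assumed length-proportional unit cost; the node coordinates and the cost figure are invented:

```python
import heapq
import math

def prim_mst(nodes, edge_cost):
    """Prim's algorithm: connect all nodes at minimum total cost.
    edge_cost(u, v) returns the cost of a candidate pipeline leg."""
    start, seen = nodes[0], set()
    pq, total, tree = [(0.0, start, start)], 0.0, []
    while pq and len(seen) < len(nodes):
        c, u, v = heapq.heappop(pq)
        if v in seen:
            continue
        seen.add(v)
        total += c
        if u != v:
            tree.append((u, v, c))
        for w in nodes:           # push candidate legs from v
            if w not in seen:
                heapq.heappush(pq, (edge_cost(v, w), v, w))
    return tree, total

# Invented layout (coordinates in km) and an assumed 1 MEUR/km CAPEX.
sites = {"PlantA": (0, 0), "PlantB": (3, 4), "PlantC": (6, 0),
         "OffshoreSink": (3, 10)}
cost = lambda u, v: math.dist(sites[u], sites[v]) * 1.0e6
tree, capex = prim_mst(list(sites), cost)
print(tree)
print(f"total CAPEX ~ {capex / 1e6:.1f} MEUR")
```

A real design would of course add pipe sizing, pressure-drop constraints, and candidate routing, which is where the wall-thickness effect noted above enters.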
Fuel-Efficient Descent and Landing Guidance Logic for a Safe Lunar Touchdown
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
2011-01-01
The landing of a crewed lunar lander on the surface of the Moon will be the climax of any Moon mission. At touchdown, the landing mechanism must absorb the load imparted on the lander due to the vertical component of the lander's touchdown velocity. A large horizontal velocity must also be avoided because it could cause the lander to tip over, risking the life of the crew. To be conservative, the worst-case touchdown velocity is always assumed in designing the landing mechanism, making it very heavy. Fuel-optimal guidance algorithms for soft planetary landing have been studied extensively. In most of these studies, the lander is constrained to touch down with zero velocity. With bounds imposed on the magnitude of the engine thrust, the optimal control solutions typically have a "bang-bang" thrust profile: the thrust magnitude "bangs" instantaneously between its maximum and minimum magnitudes. But the descent engine might not be able to throttle between its extremes instantaneously, and there is also a concern about the acceptability of bang-bang control to the crew. In our study, the optimal control of a lander is formulated with a cost function that penalizes both the touchdown velocity and the fuel cost of the descent engine. In this formulation, there is no requirement to achieve a zero touchdown velocity; only a touchdown velocity that is consistent with the capability of the landing gear design is required. Also, since the nominal throttle level for the terminal descent sub-phase is well below the peak engine thrust, no bound on the engine thrust is used in our formulated problem. Instead of a bang-bang solution, the optimal thrust generated is a continuous function of time. With this formulation, we can easily derive analytical expressions for the optimal thrust vector, touchdown velocity components, and other system variables. These expressions provide insights into the "physics" of the optimal landing and terminal descent maneuver. These insights could help engineers to achieve a better "balance" between the conflicting needs of a safe touchdown velocity, a low-weight landing mechanism, low engine fuel cost, and other design goals. In comparing the computed optimal control results with the preflight landing trajectory design of the Apollo 11 mission, we noted interesting similarities between the two.
Benefits of investing in ecosystem restoration.
DE Groot, Rudolf S; Blignaut, James; VAN DER Ploeg, Sander; Aronson, James; Elmqvist, Thomas; Farley, Joshua
2013-12-01
Measures aimed at conservation or restoration of ecosystems are often seen as net-cost projects by governments and businesses because they are based on incomplete and often faulty cost-benefit analyses. After screening over 200 studies, we examined the costs (94 studies) and benefits (225 studies) of ecosystem restoration projects that had sufficient reliable data in 9 different biomes ranging from coral reefs to tropical forests. Costs included capital investment and maintenance of the restoration project, and benefits were based on the monetary value of the total bundle of ecosystem services provided by the restored ecosystem. Assuming restoration is always imperfect and benefits attain only 75% of the maximum value of the reference systems over 20 years, we calculated the net present value at social discount rates of 2% and 8%. We also conducted two threshold-cum-sensitivity analyses. Benefit-cost ratios ranged from about 0.05:1 (coral reefs and coastal systems, worst-case scenario) to as much as 35:1 (grasslands, best-case scenario). Our results provide only partial estimates of benefits at one point in time and reflect the lower limit of the welfare benefits of ecosystem restoration, because both scarcity of and demand for ecosystem services are increasing and new benefits of natural ecosystems and biological diversity are being discovered. Nonetheless, when accounting for even the incomplete range of known benefits through the use of static estimates that fail to capture rising values, the majority of the restoration projects we analyzed provided net benefits and should be considered not only as profitable but also as high-yielding investments. © 2013 Society for Conservation Biology.
Aid to planning the marking of mining area boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giles, R.H. Jr.
Reducing trespass, legal costs, and timber and wildlife poaching and increasing control, safety, and security are key reasons why mine land boundaries need to be marked. Accidents may be reduced, especially when associated with blast area boundaries, and in some cases increased income may be gained from hunting and recreational fees on well-marked areas. A BASIC computer program for an IBM-PC has been developed that requires minimum inputs to estimate boundary marking costs. This paper describes the rationale for the program and shows representative outputs. 3 references, 3 tables.
Boehmler, Erick M.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 3.8 to 6.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 6.7 ft. The worst-case abutment scour also occurred at the 500-year discharge. Pier scour ranged from 9.1 to 10.2 ft. The worst-case pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.; Hammond, Robert E.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour at the left abutment ranged from 3.1 to 10.3 ft. with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 6.4 to 10.4 ft. with the worst-case occurring at the 100-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Medalie, Laura
1997-01-01
Contraction scour for the modelled flows ranged from 1.0 to 2.7 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour ranged from 8.4 to 17.6 ft. The worst-case abutment scour for the right abutment occurred at the incipient-overtopping discharge. For the left abutment, the worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, R.L.; Medalie, Laura
1998-01-01
Contraction scour for all modelled flows ranged from 0.0 to 2.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 6.7 to 8.7 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. Right abutment scour ranged from 7.8 to 9.5 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and Davis, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John
2016-07-01
Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. To evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the cost-effectiveness ratio of the two strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a one-way sensitivity analysis, screening results in cost savings if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at a willingness-to-pay threshold of $60,000/QALY in three clinical scenarios, progesterone therapy has to reduce PTB by 60%, 34%, and 93%, respectively. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective. Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than has been suggested by available data for US women. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
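The headline figure is a standard incremental cost-effectiveness ratio; a toy calculation with hypothetical per-patient costs and QALYs reproduces the arithmetic.

```python
# Sketch of an ICER comparison of the kind reported above; all costs and
# QALY values are hypothetical placeholders chosen to land near $158,000/QALY.
cost_screen, qaly_screen = 3_950.0, 29.125   # per-patient cost ($) and QALYs, screening
cost_none,   qaly_none   = 3_160.0, 29.120   # no-screening strategy

icer = (cost_screen - cost_none) / (qaly_screen - qaly_none)
print(f"ICER = ${icer:,.0f} per QALY gained")

wtp = 60_000  # willingness-to-pay threshold ($/QALY)
print("cost-effective at WTP" if icer <= wtp else "not cost-effective at WTP")
```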
Jain, Dhruv; Tikku, Gargi; Bhadana, Pallavi; Dravid, Chandrashekhar; Grover, Rajesh Kumar
2017-08-01
We investigated World Health Organization (WHO) grading and pattern of invasion based histological schemes as independent predictors of disease-free survival, in oral squamous carcinoma patients. Tumor resection slides of eighty-seven oral squamous carcinoma patients [pTNM: I&II/III&IV-32/55] were evaluated. Besides examining various patterns of invasion, invasive front grade, predominant and worst (highest) WHO grade were recorded. For worst WHO grading, poor-undifferentiated component was estimated semi-quantitatively at advancing tumor edge (invasive growth front) in histology sections. Tumor recurrence was observed in 31 (35.6%) cases. The 2-year disease-free survival was 47% [Median: 656; follow-up: 14-1450] days. Using receiver operating characteristic curves, we defined poor-undifferentiated component exceeding 5% of tumor as the cutoff to assign an oral squamous carcinoma as grade-3, when following worst WHO grading. Kaplan-Meier curves for disease-free survival revealed prognostic association with nodal involvement, tumor size, worst WHO grading; most common pattern of invasion and invasive pattern grading score (sum of two most predominant patterns of invasion). In further multivariate analysis, tumor size (>2.5cm) and worst WHO grading (grade-3 tumors) independently predicted reduced disease-free survival [HR, 2.85; P=0.028 and HR, 3.37; P=0.031 respectively]. The inter-observer agreement was moderate for observers who semi-quantitatively estimated percentage of poor-undifferentiated morphology in oral squamous carcinomas. Our results support the value of semi-quantitative method to assign tumors as grade-3 with worst WHO grading for predicting reduced disease-free survival. Despite limitations, of the various histological tumor stratification schemes, WHO grading holds adjunctive value for its prognostic role, ease and universal familiarity. Copyright © 2017 Elsevier Inc. All rights reserved.
Detection of MAVs (Micro Aerial Vehicles) based on millimeter wave radar
NASA Astrophysics Data System (ADS)
Noetel, Denis; Johannes, Winfried; Caris, Michael; Hommes, Alexander; Stanko, Stephan
2016-10-01
In this paper we present two system approaches for perimeter surveillance with radar techniques focused on the detection of Micro Aerial Vehicles (MAVs). The main task of such radars is to detect movements of targets such as an individual or a vehicle approaching a facility. The systems typically cover a range of several hundred meters up to several kilometers. In particular, the capability of identifying Remotely Piloted Aircraft Systems (RPAS), which pose a growing threat to critical infrastructure, is of great importance nowadays. Their low cost, ease of handling, and considerable payload make them an excellent tool for unwanted surveillance or attacks. Most platforms can be equipped with all kinds of sensors or, in the worst case, with destructive devices. A typical MAV is able to take off and land vertically, to hover, and in many cases to fly forward at high speed. Thus, it can reach all kinds of places in a short time while the concealed operator of the MAV remains at a remote, risk-free location.
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is proposed in a scenario-based approach in which demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. Decision variables are examined, in turn, with a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties; since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
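The contrast between the expected-cost and worst-case (robust) criteria can be illustrated with a toy scenario matrix; the costs and probabilities below are fabricated and stand in for the paper's mixed integer program.

```python
# Pick the supplier minimizing the maximum cost across scenarios (robust
# criterion) versus the one minimizing the probability-weighted cost.
import numpy as np

suppliers = ["A", "B", "C"]
# rows: demand/exchange-rate scenarios, columns: suppliers (invented costs)
cost = np.array([[100.0, 110.0,  80.0],
                 [105.0, 108.0, 150.0],
                 [102.0, 109.0,  85.0]])
probs = np.array([0.4, 0.2, 0.4])      # scenario probabilities

expected_pick = suppliers[int(np.argmin(probs @ cost))]
robust_pick = suppliers[int(np.argmin(cost.max(axis=0)))]
print("expected-cost choice:      ", expected_pick)  # C: cheap on average
print("worst-case (robust) choice:", robust_pick)    # A: bounded downside
```

The two criteria select different suppliers here, which mirrors the case-study observation that accounting for uncertainty changes the selection.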
NASA Technical Reports Server (NTRS)
Chetty, P. R. K.; Roufberg, Lew; Costogue, Ernest
1991-01-01
The TOPEX mission requirements which impact the power requirements and analyses are presented. A description of the electrical power system (EPS), including energy management and battery charging methods that were conceived and developed to meet the identified satellite requirements, is included. Analysis of the TOPEX EPS confirms that all of its electrical performance and reliability requirements have been met. The TOPEX EPS employs the flight-proven modular power system (MPS) which is part of the Multimission Modular Spacecraft and provides high reliability, abbreviated development effort and schedule, and low cost. An energy balance equation, unique to TOPEX, has been derived to confirm that the batteries will be completely recharged following each eclipse, under worst-case conditions. TOPEX uses three NASA Standard 50AH Ni-Cd batteries, each with 22 cells in series. The MPS contains battery charge control and protection based on measurements of battery currents, voltages, temperatures, and computed depth-of-discharge. In case of impending battery depletion, the MPS automatically implements load shedding.
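A back-of-envelope version of such an orbit-average energy-balance check, with hypothetical (non-TOPEX) numbers: the worst-case sunlit array output must resupply the eclipse discharge, through the charging efficiency, while also carrying the sunlit loads.

```python
# Energy-balance sketch: is the eclipse discharge fully recharged each orbit?
array_power = 3200.0      # worst-case array output in sunlight [W], hypothetical
load_power = 1700.0       # spacecraft load [W], eclipse and sunlight alike
t_sun, t_ecl = 3700.0, 2100.0   # sunlit / eclipse durations per orbit [s]
charge_eff = 0.85         # assumed round-trip battery charging efficiency

ecl_discharge = load_power * t_ecl               # energy drawn from batteries [J]
surplus = (array_power - load_power) * t_sun     # energy available for recharge [J]
print("energy balance OK" if surplus * charge_eff >= ecl_discharge
      else "batteries not fully recharged")
```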
NASA Astrophysics Data System (ADS)
Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi
2010-01-01
In an emergency such as the Wenchuan earthquake, it is impossible to observe a given target on Earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so a way must be found to reasonably utilize the existing satellites to rapidly image the affected area. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as a standard of reference. Two factors must be taken into consideration simultaneously in orbit design, i.e., the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. When calculating the operational orbit elements as optimal parameters to be evaluated, we obtain the minimum objective function by comparing the results derived from the primer vector theory with those derived from the Hohmann transfer, because an operational orbit for observing the disaster area with impulse maneuvers is considered in this paper. The primer vector theory is utilized to optimize the transfer trajectory with three impulses, and the Hohmann transfer is utilized for coplanar cases and non-coplanar cases with small inclination differences. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that the primer vector and Hohmann transfer theory are effective methods for multi-objective orbit optimization.
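The Hohmann transfer used as the comparison case has a closed-form delta-v; a quick sketch with the standard two-impulse formulas and arbitrary example orbits:

```python
# Total delta-v of a coplanar Hohmann transfer between circular orbits r1, r2.
import math

mu = 3.986004418e14          # Earth's gravitational parameter [m^3/s^2]
Re = 6_378_137.0             # Earth radius [m]
r1, r2 = Re + 500e3, Re + 800e3   # example initial and target orbit radii

dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # departure burn
dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # arrival burn
print(f"total Hohmann delta-v: {dv1 + dv2:.1f} m/s")
```

A three-impulse primer-vector solution can beat this cost for some geometries, which is exactly the comparison the objective function in the study performs.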
Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders
2010-08-15
This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which again challenges risk assessors in terms of giving the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis. Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk components in cumulative risk management. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance, while the second part performs a quantum search in that restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity for the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a worst-case running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
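For orientation, the classical 2^n enumeration baseline that the quantum bounds improve on can be made concrete with a brute-force Exact-SAT counter; the example formula is arbitrary.

```python
# Exact Satisfiability: every clause must have exactly one true literal.
# Brute force over all 2^n assignments, the naive classical baseline.
from itertools import product

n = 4
# clauses as lists of literals: +i means x_i, -i means NOT x_i (1-indexed)
clauses = [[1, 2, -3], [-1, 3, 4], [2, -4]]

def exactly_one_true(assign, clause):
    return sum(assign[abs(l) - 1] == (l > 0) for l in clause) == 1

count = 0
for assign in product([False, True], repeat=n):   # all 2^n assignments
    if all(exactly_one_true(assign, c) for c in clauses):
        count += 1
print(f"{count} valid assignments out of {2**n}")
```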
Identifying Opportunities for Improving the Quality of Life of Older Age Groups.
ERIC Educational Resources Information Center
Flanagan, John C.
The situations and feelings of representative national samples of 50- and 70-year-olds were investigated in order to provide a representative, adequate data base for planning programs and policies that will result in maximum improvement in the quality of life of older Americans at minimum costs to taxpayers. Intensive case studies of 1,000…
Roland: A Case for or Against NATO Standardization?
1980-05-01
with often competing, even opposing, objectives in testing, financial auditing, cost estimating, reliability, value engineering, maintenance, training... supposedly mature system. Multilocation tests, early in the program when test beds and spare parts availability would be at a minimum, would require... Similar institutionalized conflicts resided in the audit community, which, under the Armed Services Procurement Regulation, was required to audit and
Managing risk in a challenging financial environment.
Kaufman, Kenneth
2008-08-01
Five strategies can help hospital financial leaders balance their organizations' financial and risk positions: Understand the hospital's financial condition; Determine the desired level of risk; Consider total risk; Use a portfolio approach; Explore best-case/worst-case scenarios to measure risk.
Intelligent Accountability in Education
ERIC Educational Resources Information Center
O'Neill, Onora
2013-01-01
Systems of accountability are "second order" ways of using evidence of the standard to which "first order" tasks are carried out for a great variety of purposes. However, more accountability is not always better, and processes of holding to account can impose high costs without securing substantial benefits. At their worst,…
De Schrijver, Adinda; Devos, Yann; De Clercq, Patrick; Gathmann, Achim; Romeis, Jörg
2016-08-01
The potential risks that genetically modified plants may pose to non-target organisms and the ecosystem services they contribute to are assessed as part of pre-market risk assessments. This paper reviews the early tier studies testing the hypothesis that exposure to plant-produced Cry34/35Ab1 proteins as a result of cultivation of maize 59122 is harmful to valued non-target organisms, in particular Arthropoda and Annelida. The available studies were assessed for their scientific quality by considering a set of criteria determining their relevance and reliability. As a case study, this exercise revealed that when not all quality criteria are met, weighing a study's robustness and its relevance for risk assessment is not straightforward. Applying a worst-case expected environmental concentration of bioactive toxins equivalent to that present in the transgenic crop, confirming exposure of the test species to the test substance, and the use of a negative control were identified as minimum criteria to be met to guarantee sufficiently reliable data. This exercise stresses the importance of conducting studies meeting certain quality standards, as this minimises the probability of erroneous or inconclusive results, increases confidence in the results, and adds certainty to the conclusions drawn.
Varela, José E; Page, Juan E; Esteban, Jaime
2010-09-01
The interaction between electromagnetic fields and biological media, particularly regarding very high power, short pulses as in radar signals, is not a fully understood phenomenon. In the past few years, many in vitro, cellular communications-oriented exposure studies have been carried out. This article presents a high-power waveguide exposure system capable of dealing with monochromatic, multicarrier, or pulsed signals between 1.8 and 3.2 GHz (L- and S-band) with a pulse duration as low as 90 ns, a minimum pulse repetition rate of 100 Hz, and a maximum instantaneous power of 100 W. The setup is currently being used with a 2.2 GHz carrier modulated by 5 μs pulses at a 100 Hz repetition rate and approximately 30 W of instantaneous power. After a worst-case temperature analysis, which does not account for conduction and convection thermal effects, the experiment's exposure is considered sub-thermal. Evaluation of the results through the specific absorption rate distribution alone is not considered sufficient in these cases; an electromagnetic field distribution analysis is needed. For monochromatic signals, the representation of the modulus of the electric and magnetic field components is proposed as a suitable method of assessment. 2010 Wiley-Liss, Inc.
The influence of climate change on Tanzania's hydropower sustainability
NASA Astrophysics Data System (ADS)
Sperna Weiland, Frederiek; Boehlert, Brent; Meijer, Karen; Schellekens, Jaap; Magnell, Jan-Petter; Helbrink, Jakob; Kassana, Leonard; Liden, Rikard
2015-04-01
Economic costs induced by current climate variability are large for Tanzania and may further increase due to future climate change. The Tanzanian National Climate Change Strategy addressed the need for stabilization of hydropower generation and strengthening of water resources management. Increased hydropower generation can contribute to sustainable use of energy resources and stabilization of the national electricity grid. To support Tanzania, the World Bank financed this study, in which the impact of climate change on the water resources and related hydropower generation capacity of Tanzania is assessed. To this end, an ensemble of 78 GCM projections from both the CMIP3 and CMIP5 datasets was bias-corrected and downscaled to 0.5 degree resolution following the BCSD technique, using the Princeton Global Meteorological Forcing Dataset as a reference. To quantify the hydrological impacts of climate change by 2035, the global hydrological model PCR-GLOBWB was set up for Tanzania at a resolution of 3 arc-minutes and run with all 78 GCM datasets. From the full set of projections, a probable (median) and a worst-case scenario (95th percentile) were selected based upon (1) the country-average Climate Moisture Index and (2) discharge statistics of relevance to hydropower generation. Although precipitation from the Princeton dataset shows deviations from local station measurements and the global hydrological model does not perfectly reproduce local-scale hydrographs, the main discharge characteristics and precipitation patterns are represented well. The modeled natural river flows were adjusted for water demand and irrigation within the water resources model RIBASIM (both historical values and future scenarios). Potential hydropower capacity was assessed with the power market simulation model PoMo-C, which considers both reservoir inflows obtained from RIBASIM and overall electricity generation costs. Results of the study show that climate change is unlikely to negatively affect the average potential of future hydropower production; it will likely make hydropower more profitable. Yet the uncertainty in climate change projections remains large and the risks are significant; adaptation strategies should ideally consider a worst-case scenario to ensure robust power generation. Overall, a diversified power generation portfolio, anchored in hydropower and supported by other renewables and fossil fuel-based energy sources, is the best solution for Tanzania.
Can households earning minimum wage in Nova Scotia afford a nutritious diet?
Williams, Patricia L; Johnson, Christine P; Kratzmann, Meredith L V; Johnson, C Shanthi Jacob; Anderson, Barbara J; Chenhall, Cathy
2006-01-01
To assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia. Food costing data were collected in 43 randomly selected grocery stores throughout NS in 2002 using the National Nutritious Food Basket (NNFB). To estimate the affordability of a nutritious diet for households earning minimum wage, average monthly costs for essential expenses were subtracted from overall income to see if enough money remained for the cost of the NNFB. This was calculated for three types of household: 1) two parents and two children; 2) lone parent and two children; and 3) single male. Calculations were also made for the proposed 2006 minimum wage increase with expenses adjusted using the Consumer Price Index (CPI). The monthly cost of the NNFB priced in 2002 for the three types of household was $572.90, $351.68, and $198.73, respectively. Put into the context of basic living, these data showed that Nova Scotians relying on minimum wage could not afford to purchase a nutritious diet and meet their basic needs, placing their health at risk. These basic expenses do not include other routine costs, such as personal hygiene products, household and laundry cleaners, and prescriptions, nor costs associated with physical activity, education or savings for unexpected expenses. People working at minimum wage in Nova Scotia have not had adequate income to meet basic needs, including a nutritious diet. The 2006 increase in minimum wage to $7.15/hr is inadequate to ensure that Nova Scotians working at minimum wage are able to meet these basic needs. Wage increases and supplements, along with supports for expenses such as childcare and transportation, are indicated to address this public health problem.
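The affordability test reduces to simple arithmetic; a sketch with placeholder figures (only the NNFB cost echoes the family-of-five value above):

```python
# After essential expenses, does a minimum-wage income cover the food basket?
monthly_income = 1_830.0      # net household income at minimum wage ($), hypothetical
essential_expenses = 1_420.0  # rent, utilities, transport, childcare, ... ($), hypothetical
nnfb_cost = 572.90            # NNFB monthly cost, two parents and two children ($)

remaining = monthly_income - essential_expenses
shortfall = nnfb_cost - remaining
print(f"left for food: ${remaining:.2f}; basket costs ${nnfb_cost:.2f}")
print(f"shortfall: ${shortfall:.2f}" if shortfall > 0 else "diet affordable")
```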
BCD Beam Search: considering suboptimal partial solutions in Bad Clade Deletion supertrees.
Fleischauer, Markus; Böcker, Sebastian
2018-01-01
Supertree methods enable the reconstruction of large phylogenies. The supertree problem can be formalized in different ways in order to cope with contradictory information in the input. Some supertree methods are based on encoding the input trees in a matrix; other methods try to find minimum cuts in some graph. Recently, we introduced Bad Clade Deletion (BCD) supertrees, which combine the graph-based computation of minimum cuts with optimizing a global objective function on the matrix representation of the input trees. The BCD supertree method has guaranteed polynomial running time and is very swift in practice. The quality of reconstructed supertrees was superior to matrix representation with parsimony (MRP) and usually on par with SuperFine for simulated data; for biological data in particular, however, the quality of BCD supertrees could not keep up with SuperFine supertrees. Here, we present a beam search extension for the BCD algorithm that keeps alive a constant number of partial solutions in each top-down iteration phase. The guaranteed worst-case running time of the new algorithm is still polynomial in the size of the input. We present an exact and a randomized subroutine to generate suboptimal partial solutions. Both beam search approaches consistently improve supertree quality on all evaluated datasets when keeping 25 suboptimal solutions alive. Supertree quality of the BCD Beam Search algorithm is on par with MRP and SuperFine even for biological data. This is the best performance of a polynomial-time supertree algorithm reported so far.
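A generic beam-search skeleton of the kind the extension describes, keeping a fixed number of partial solutions per top-down iteration; `expand` and `score` are placeholders for BCD's cut generation and objective, not the actual implementation.

```python
# Beam search: expand all surviving partial solutions, keep the k best.
def beam_search(root, expand, score, beam_width=25, steps=10):
    beam = [root]
    for _ in range(steps):
        candidates = [child for sol in beam for child in expand(sol)]
        if not candidates:
            break
        beam = sorted(candidates, key=score)[:beam_width]  # lower score = better
    return min(beam, key=score)

# toy usage: "solutions" are numbers, children perturb them, target is 42
best = beam_search(0, expand=lambda x: [x + 5, x + 7, x - 1],
                   score=lambda x: abs(x - 42))
print(best)
```

With beam_width=1 this degenerates to the original greedy top-down construction; widening the beam is what lets suboptimal partial solutions survive long enough to yield a better final supertree.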
Option pricing: a flexible tool to disseminate shared savings contracts.
Friedberg, Mark W; Buendia, Anthony M; Lauderdale, Katherine E; Hussey, Peter S
2013-08-01
Due to volatility in healthcare costs, shared savings contracts can create systematic financial losses for payers, especially when contracting with smaller providers. To improve the business case for shared savings, we calculated the prices of financial options that payers can "sell" to providers to offset these losses. Using 2009 to 2010 member-level total cost of care data from a large commercial health plan, we calculated option prices by applying a bootstrap simulation procedure. We repeated these simulations for providers of sizes ranging from 500 to 60,000 patients and for shared savings contracts with and without key design features (minimum savings thresholds, bonus caps, cost outlier truncation, and downside risk) and under assumptions of zero, 1%, and 2% real cost reductions due to the shared savings contracts. Assuming no real cost reduction and a 50% shared savings rate, per patient option prices ranged from $225 (3.1% of overall costs) for 500-patient providers to $23 (0.3%) for 60,000-patient providers. Introducing minimum savings thresholds, bonus caps, cost outlier truncation, and downside risk reduced these option prices. Option prices were highly sensitive to the magnitude of real cost reductions. If shared savings contracts cause 2% reductions in total costs, option prices fall to zero for all but the smallest providers. Calculating the prices of financial options that protect payers and providers from downside risk can inject flexibility into shared savings contracts, extend such contracts to smaller providers, and clarify the tradeoffs between different contract designs, potentially speeding the dissemination of shared savings.
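The bootstrap pricing idea can be sketched as follows: with zero real savings, random variation alone triggers shared-savings payouts, and the option price is their expected value. The cost distribution, panel size, and contract terms below are all invented.

```python
# Bootstrap a small provider's panel from "claims data" and price the option
# as the mean shared-savings payout under the null of no real cost reduction.
import numpy as np

rng = np.random.default_rng(0)
member_costs = rng.lognormal(mean=8.0, sigma=1.2, size=100_000)  # stand-in claims
baseline = member_costs.mean()
shared_rate, n_patients, n_boot = 0.5, 500, 2000

payouts = []
for _ in range(n_boot):                        # resample provider panels
    panel = rng.choice(member_costs, size=n_patients, replace=True)
    savings_per_patient = baseline - panel.mean()
    payouts.append(shared_rate * max(savings_per_patient, 0.0))

print(f"option price ~= ${np.mean(payouts):,.0f} per patient")
```

Repeating this with larger panels shrinks the sampling variation of the panel mean, which is why the quoted option prices fall steeply with provider size.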
A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack
NASA Technical Reports Server (NTRS)
London, K. W.
2001-01-01
A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing more specialized representations. This model is used to predict the nutation growth time constant for the Triana spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst-case sloshing of a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing of a three-degree-of-freedom pendulum model.
Shokar, Navkiran K; Byrd, Theresa; Salaiz, Rebekah; Flores, Silvia; Chaparro, Maria; Calderon-Mora, Jessica; Reininger, Belinda; Dwivedi, Alok
2016-10-01
Colorectal cancer (CRC) is the second leading cause of cancer deaths in the USA. Screening is widely recommended but underutilized, particularly among the low income, the uninsured, recent immigrants and Hispanics. The study objective was to determine the effectiveness of a comprehensive community-wide, bilingual, CRC screening intervention among uninsured predominantly Hispanic individuals. This prospective study was embedded in a CRC screening program and utilized a quasi-experimental design. Recruitment occurred from community and clinic sites. Inclusion criteria were age 50-75 years, uninsured, due for CRC screening, and a Texas address; exclusions were a history of CRC or recent rectal bleeding. Eligible subjects were randomized to either promotora (P), video (V), or combined promotora and video (PV) education, and also received no-cost screening with fecal immunochemical testing or colonoscopy and navigation. The non-randomly allocated controls, recruited from a similar county, received no intervention. The main outcome was 6-month self-reported CRC screening. Per-protocol and worst-case scenario analyses, and logistic regression with covariate adjustment, were performed. 784 subjects (467 in the intervention group, 317 controls) were recruited; mean age was 56.8 years; 78.4% were female, 98.7% were Hispanic, and 90.0% were born in Mexico. In the worst-case scenario analysis (n=784), screening uptake was 80.5% in the intervention group and 17.0% in the control group [relative risk 4.73, 95% CI: 3.69-6.05, P<0.001]. No educational group differences were observed. Covariate adjustment did not significantly alter the effect. A multicomponent community-wide, bilingual, CRC screening intervention significantly increased CRC screening in an uninsured predominantly Hispanic population. Copyright © 2016 Elsevier Inc. All rights reserved.
Li, Xue; Mupondwa, Edmund; Tabil, Lope
2018-02-01
This study undertakes a technoeconomic analysis of commercial production of hydro-processed renewable jet (HRJ) fuel from camelina oil in the Canadian Prairies. An engineering economic model designed in SuperPro Designer® investigated capital investment, scale, and profitability of producing HRJ and co-products (biodiesel, naphtha, LPG, and propane) based on biorefinery plant sizes of 112.5-675 million L per annum. Under the base case scenario, the minimum selling price (MSP) of HRJ was $1.06 per L for a biorefinery plant with a size of 225 million L. However, it could range from $0.40 to $1.71 per L given variations in plant capacity, feedstock cost, and co-product credits. MSP is highly sensitive to camelina feedstock cost and co-product credits, with little sensitivity to capital cost, discount rate, plant capacity, and hydrogen cost. Marginal and average cost curves suggest the region could support an HRJ plant capacity of up to 675 million L per annum (capital investment of $167 million). Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Schild, S; Bues, M
Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme, which addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Although robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem nowadays. The challenge of solving it rises whenever the complexity of preferences, the existence of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, law graduates must complete chambering before they are qualified to become a legal counselor. Thus, assigning the chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of solution quality. The analysis of the obtained results has shown that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research has demonstrated the advantages of solving the project assignment problem using metaheuristic techniques.
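A compact sketch of the two-stage scheme described above, a minimum-cost greedy construction followed by simulated-annealing improvement, applied to a made-up time matrix; reading "total completion time" as the time until the last student finishes is an assumption of this sketch.

```python
import math, random

random.seed(1)
n_students, n_cases = 4, 12
t = [[random.randint(1, 9) for _ in range(n_cases)] for _ in range(n_students)]

def completion_time(assign):            # assign[c] = student handling case c
    loads = [0] * n_students
    for c, s in enumerate(assign):
        loads[s] += t[s][c]
    return max(loads)                   # time until the last student finishes

# minimum-cost greedy construction: each case goes to its fastest student
assign = [min(range(n_students), key=lambda s: t[s][c]) for c in range(n_cases)]
best_cost, T = completion_time(assign), 10.0
while T > 0.01:
    cand = assign[:]
    cand[random.randrange(n_cases)] = random.randrange(n_students)  # neighbour move
    delta = completion_time(cand) - completion_time(assign)
    if delta < 0 or random.random() < math.exp(-delta / T):         # Metropolis rule
        assign = cand
        best_cost = min(best_cost, completion_time(assign))
    T *= 0.995                          # geometric cooling
print("greedy-then-annealed completion time:", best_cost)
```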
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a fault tree analysis (FTA) of the system, or on an event tree analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is presented and detailed using the same example case study. In the end, it is concluded that an approach combining the two theories works best to reduce safety risk.
Nair, Nisha; Kvizhinadze, Giorgi; Blakely, Tony
2016-12-01
To assess the cost-effectiveness of a cancer care coordinator (CCC) in helping women with estrogen receptor positive (ER+) early breast cancer persist with tamoxifen for 5 years. We investigated the cost-effectiveness of a CCC across eight breast cancer subtypes, defined by progesterone receptor (PR) status, human epidermal growth factor receptor 2 (HER2) status, and local/regional spread. These subtypes range from excellent to poorer prognoses. The CCC helped in improving tamoxifen persistence by providing information, checking in by phone, and "troubleshooting" concerns. We constructed a Markov macrosimulation model to estimate health gain (in quality-adjusted life-years, or QALYs) and health system costs in New Zealand, compared with no CCC. Participants were modeled until death or until the age of 110 years. Some input parameters (e.g., the impact of a CCC on tamoxifen persistence) had sparse evidence; therefore, we used estimates with generous uncertainty and conducted sensitivity analyses. The cost-effectiveness of a CCC for regional ER+/PR-/HER2+ breast cancer (worst prognosis) was NZ $23,400 (US $15,800) per QALY gained, compared with NZ $368,500 (US $248,800) for local ER+/PR+/HER2- breast cancer (best prognosis). Using a cost-effectiveness threshold of NZ $45,000 (US $30,400) per QALY, a CCC would be cost-effective only in the four subtypes with the worst prognoses. There is value in investigating cost-effectiveness by different subtypes within a disease. In this example of breast cancer, the poorer the prognosis, the greater the health gains from a CCC and the better the cost-effectiveness. Incorporating heterogeneity in a cost-utility analysis is important and can inform resource allocation decisions. It is also feasible to undertake in practice. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, Henry
This research was mostly concerned with asymmetric vertical displacement event (AVDE) disruptions, which are the worst case scenario for producing a large asymmetric wall force. This is potentially a serious problem in ITER.
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Fast marching methods for the continuous traveling salesman problem
Andrews, June; Sethian, J. A.
2007-01-01
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points (“cities”) in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The complexity of the heuristic algorithm is at worst O(M·N log N), where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest paths problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh. PMID:17220271
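A discrete stand-in for this pipeline: Dijkstra's algorithm on a grid with position-dependent cost (in place of a fast-marching Eikonal solver) supplies the city-to-city path costs, and a nearest-neighbour heuristic chains them into a tour. Grid, terrain costs, and city positions are made up.

```python
import heapq

W = H = 20
cost = [[1.0 + ((x * 7 + y * 13) % 5) for x in range(W)] for y in range(H)]  # terrain
cities = [(2, 2), (17, 3), (9, 15), (4, 18)]

def dijkstra(src):
    # cheapest cost from src to every grid cell, 4-connected moves
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if d > dist.get((x, y), float("inf")):
            continue
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < W and 0 <= ny < H:
                nd = d + cost[ny][nx]
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, (nx, ny)))
    return dist

pair = {c: dijkstra(c) for c in cities}          # city-to-everywhere costs
tour, left = [cities[0]], set(cities[1:])
while left:                                      # nearest-neighbour tour heuristic
    nxt = min(left, key=lambda c: pair[tour[-1]][c])
    tour.append(nxt)
    left.remove(nxt)
total = sum(pair[a][b] for a, b in zip(tour, tour[1:])) + pair[tour[-1]][tour[0]]
print("tour:", tour, "cost:", round(total, 1))
```

One shortest-path sweep per city mirrors the M·N log N structure of the heuristic bound: M single-source solves, each O(N log N) on a mesh of N points.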
Prato, S; La Valle, P; De Luca, E; Lattanzi, L; Migliore, G; Morgana, J G; Munari, C; Nicoletti, L; Izzo, G; Mistri, M
2014-03-15
The Water Framework Directive uses the "one-out, all-out" principle in assessing water bodies (i.e., the worst status of the elements used in the assessment determines the final status of the water body). In this study, we assessed the ecological status of two coastal lakes in Italy. Indices for all biological quality elements used in transitional waters from the Italian legislation and other European countries were employed and compared. Based on our analyses, the two lakes require restoration, despite the lush seagrass beds, structured macrobenthic communities, and rich fish fauna they harbor. The "one-out, all-out" principle tends to inflate Type I errors, i.e., concluding that a water body is below "good" status even if the water body actually has "good" status. This may cause additional restoration costs where they are not necessarily needed. The results from this study strongly support the need for alternative approaches to the "one-out, all-out" principle. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multiple usage of the CD PLUS/UNIX system: performance in practice.
Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A
1995-01-01
In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable. PMID:8547902
Carter, D A; Hirst, I L
2000-01-07
This paper considers the application of one of the weighted risk indicators used by the Major Hazards Assessment Unit (MHAU) of the Health and Safety Executive (HSE) in formulating advice to local planning authorities on the siting of new major accident hazard installations. In such cases the primary consideration is to ensure that the proposed installation would not be incompatible with existing developments in the vicinity, as identified by the categorisation of the existing developments and the estimation of individual risk values at those developments. In addition a simple methodology, described here, based on MHAU's "Risk Integral" and a single "worst case" event analysis, is used to enable the societal risk aspects of the hazardous installation to be considered at an early stage of the proposal, and to determine the degree of analysis that will be necessary to enable HSE to give appropriate advice.
40 CFR 90.119 - Certification procedure-testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... must select the duty cycle that will result in worst-case emission results for certification. For any... facility, in which case instrumentation and equipment specified by the Administrator must be made available... manufacturers may not use any equipment, instruments, or tools to identify malfunctioning, maladjusted, or...
Williams, Peter
2010-11-01
Healthy food baskets have been used around the world for a variety of purposes, including: examining the difference in cost between healthy and unhealthy food; mapping the availability of healthy foods in different locations; calculating the minimum cost of an adequate diet for social policy planning; developing educational material on low cost eating and examining trends on food costs over time. In Australia, the Illawarra Healthy Food Basket was developed in 2000 to monitor trends in the affordability of healthy food compared to average weekly wages and social welfare benefits for the unemployed. It consists of 57 items selected to meet the nutritional requirements of a reference family of five. Bi-annual costing from 2000-2009 has shown that the basket costs have increased by 38.4% in the 10-year period, but that affordability has remained relatively constant at around 30% of average household incomes.
Secondary electric power generation with minimum engine bleed
NASA Technical Reports Server (NTRS)
Tagge, G. E.
1983-01-01
Secondary electric power generation with minimum engine bleed is discussed. Present and future jet engine systems are compared. The role of auxiliary power units is evaluated. Details of secondary electric power generation systems with and without auxiliary power units are given. Advanced bleed systems are compared with minimum bleed systems. A cost model of ownership is given. The difference in the cost of ownership between a minimum bleed system and an advanced bleed system is given.
ERIC Educational Resources Information Center
Carey, Kevin
2013-01-01
For 40 years, federal money has sustained higher education while enabling its worst tendencies. That is about to change. The end may have come on February 12, 2013, when President Barack Obama delivered his State of the Union address. "Skyrocketing costs," the president said, "price way too many young people out of a higher education, or saddle…
Khan, D; Samadder, S R
2016-07-01
Collection of municipal solid waste is one of the most important elements of municipal waste management and accounts for the largest share of the funds allocated to waste management. The cost of collection and transportation can be reduced relative to the present scenario if the solid waste collection bins are located at suitable places such that the collection routes are minimized. This study presents a method for allocating solid waste collection bins at appropriate, uniformly spaced, and easily accessible locations, so that collection vehicle routes are minimized, for the city of Dhanbad, India. The network analyst toolset available in ArcGIS was used to find the optimised route for solid waste collection, considering all the parameters required for efficient solid waste collection. These parameters include the positions of solid waste collection bins, the road network, the population density, waste collection schedules, truck capacities, and their characteristics. The present study also demonstrates the significant cost reductions that can be obtained compared with the current practices in the study area. The vehicle routing problem solver tool of ArcGIS was used to identify the cost-effective scenario for waste collection, to estimate its running costs, and to simulate its application considering both travel time and travel distance simultaneously. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Gasbarri, Paolo; Monti, Riccardo; Campolo, Giovanni; Toglia, Chiara
2012-12-01
The design of large space structures (LSS) requires the use of design and analysis tools that include different disciplines. For such spacecraft it is in fact mandatory that mechanical design and guidance, navigation and control (GNC) design are developed within a common framework. One of the key points in the development of LSS is related to dynamic phenomena. These phenomena usually lead to two different interpretations. The former is related to the overall motion of the spacecraft, i.e., the motion of the centre of gravity and the motion around the centre of gravity. The latter is related to the local motion of the elastic elements that leads to oscillations. These oscillations have in turn a disturbing effect on the motion of the spacecraft. From an engineering perspective, the structural model of flexible spacecraft is generally obtained via FEM, involving thousands of degrees of freedom (DOFs). Many of them are not significant from the attitude control point of view. One of the procedures to reduce the structural DOFs is tied to the modal decomposition technique. In the present paper a technique to develop a control-oriented structural model will be proposed. Starting from a detailed FE model of the spacecraft and using a special modal condensation approach, a continuous model is defined. With this transformation, the number of DOFs necessary to study the coupled elastic/rigid dynamics is reduced. The final dynamic model will be suitable for control design implementation. In order to properly design a satellite controller, it is important to recall that the characteristic parameters of the satellite are uncertain. The effect that uncertainties have on control performance must be investigated. A possible solution is that, after the attitude controller is designed on the nominal model, a Verification and Validation (V&V) process is performed to guarantee correct functionality under a large number of scenarios. The V&V process can be very lengthy and expensive: difficulty and cost increase with the overall system dimension, which depends on the number of uncertainties. Uncertain parameters have to be investigated parametrically, via gridding approaches, to determine the robust performance of the control laws. In particular, in this paper we propose to consider two methods: (i) a conventional Monte Carlo analysis, and (ii) a worst-case analysis, i.e., an optimization process to find an estimate of the true worst-case behaviour. Both techniques make it possible to verify that the design is robust enough to meet the system performance specification in the presence of uncertainties.
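The two verification routes named above can be contrasted on a toy metric; the "stability margin" function below is a stand-in for illustration, not a spacecraft model.

```python
import numpy as np
from scipy.optimize import minimize

def margin(p):
    # toy "stability margin" versus two uncertain parameters; smaller = worse
    return 1.0 - 0.5 * np.sin(3 * p[0]) * np.cos(2 * p[1]) - 0.1 * p[0] ** 2

bounds = [(-1.0, 1.0), (-1.0, 1.0)]
rng = np.random.default_rng(0)

samples = rng.uniform(-1.0, 1.0, size=(10_000, 2))    # (i) Monte Carlo sampling
mc_worst = min(margin(s) for s in samples)

res = minimize(margin, x0=[0.0, 0.0], bounds=bounds)  # (ii) worst-case search
print(f"Monte Carlo worst margin:    {mc_worst:.4f}")
print(f"optimized worst-case margin: {res.fun:.4f}")
```

Monte Carlo gives statistical coverage at a cost that grows with the number of uncertain parameters, while the optimization hunts directly for the worst combination; the paper's point is that the two are complementary in a V&V campaign.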
Ivanoff, Michael A.
1997-01-01
Contraction scour for all modelled flows ranged from 2.1 to 4.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 14.3 to 14.4 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping and 500-year discharges. Right abutment scour ranged from 15.3 to 18.5 ft. The worst-case right abutment scour occurred at the 100-year and the incipient roadway-overtopping discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure from different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels, with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on the in situ RS and S-SYNC signals is less than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
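A minimal sketch of the extrapolation arithmetic, assuming the per-resource-element RS field simply scales to full power with the square root of the number of subcarriers (fields add in quadrature when powers add); the field value and channel configuration below are illustrative, not the paper's data:

```python
import math

# Measured per-resource-element reference-signal field (V/m) -- illustrative.
e_rs = 0.05

# Assumed extrapolation factor: at full base-station load all resource
# elements transmit, so the worst-case field scales with sqrt(subcarriers).
# 10 MHz LTE: 50 resource blocks x 12 subcarriers = 600 subcarriers (assumed).
n_subcarriers = 600
e_worst = e_rs * math.sqrt(n_subcarriers)

icnirp_limit = 61.0  # V/m reference level above 2 GHz, general public
print(f"worst-case LTE field: {e_worst:.2f} V/m "
      f"({icnirp_limit / e_worst:.0f}x below the ICNIRP reference level)")
```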
MP3 player listening sound pressure levels among 10 to 17 year old students.
Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M
2011-11-01
Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤ 75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
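The comparison with occupational limits uses the standard 8-hour normalization Lex = Leq + 10 log10(T/8). A worked Python example, with an assumed 2 h of daily listening at the study's worst-case median of 76 dBA:

```python
import math

def lex_8h(leq_dba: float, hours_per_day: float) -> float:
    """Normalize a level Leq measured over `hours_per_day` to an
    8-hour working-day exposure level Lex (ISO 1999 convention)."""
    return leq_dba + 10.0 * math.log10(hours_per_day / 8.0)

# Illustrative: worst-case median level (76 dBA) for an assumed 2 h/day.
print(f"Lex = {lex_8h(76.0, 2.0):.1f} dBA")   # 70.0 dBA, below the 85 dBA limit
```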
Boehmler, Erick M.; Weber, Matthew A.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.3 ft. The worst-case contraction scour occurred at the incipient overtopping discharge, which was less than the 100-year discharge. Abutment scour ranged from 6.2 to 9.4 ft. The worst-case abutment scour for the right abutment was 9.4 feet at the 100-year discharge. The worst-case abutment scour for the left abutment was 8.6 feet at the incipient overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Degnan, James R.
1997-01-01
Contraction scour for all modelled flows ranged from 2.6 to 4.6 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge. The left abutment scour ranged from 11.6 to 12.1 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. The right abutment scour ranged from 13.6 to 17.9 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Bergschmidt, Philipp; Dammer, Rebecca; Zietz, Carmen; Finze, Susanne; Mittelmeier, Wolfram; Bader, Rainer
2016-06-01
Evaluation of the adhesive strength of femoral components to the bone cement is a relevant parameter for predicting implant safety. In the present experimental study, three types of cemented femoral components (metallic, ceramic and silica/silane-layered ceramic) of the bicondylar Multigen Plus knee system, implanted on composite femora, were analysed. A pull-off test of the femoral components was performed after different loading and cementing conditions (four groups, with n=3 metallic, ceramic and silica/silane-layered ceramic components in each group). Pull-off forces were comparable for the metallic and the silica/silane-layered ceramic femoral components (mean 4769 N and 4298 N) under the standard test condition, whereas uncoated ceramic femoral components showed reduced pull-off forces (mean 2322 N). Loading under worst-case conditions led to decreased adhesive strength, with loosening at the interface between implant and bone cement for the uncoated metallic and ceramic femoral components, respectively. Silica/silane-coated ceramic components remained stably fixed even under worst-case conditions. Loading at high flexion angles can induce interfacial tensile stress, which could promote early implant loosening. In conclusion, a silica/silane coating layer on the femoral component increased its adhesive strength to bone cement. Thicker cement mantles (>2 mm) reduce the adhesive strength of the femoral component and can increase the risk of cement break-off.
Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome.
Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack
2016-05-10
Gleason scoring (GS) has major deficiencies, and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has recently been agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse as the outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on the overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Using both 'worst' and 'overall' GS yielded highly significant results on univariable and multivariable analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure.
Board Level Proton Testing Book of Knowledge for NASA Electronic Parts and Packaging Program
NASA Technical Reports Server (NTRS)
Guertin, Steven M.
2017-01-01
This book of knowledge (BoK) provides a critical review of the benefits and difficulties associated with using proton irradiation as a means of exploring the radiation hardness of commercial-off-the-shelf (COTS) systems. This work was developed for the NASA Electronic Parts and Packaging (NEPP) Board Level Testing for COTS task. The fundamental findings of this BoK are the following. The board-level test method can reduce the worst-case estimate of a board's single-event effect (SEE) sensitivity compared to the case of no test data, but only by a factor of ten. The estimated worst-case rate of failure for untested boards is about 0.1 SEE/board-day. By employing protons with energies near or above 200 MeV, this rate can be safely reduced to 0.01 SEE/board-day, with only those SEEs with deep charge collection mechanisms reaching this level. For general SEEs, such as static random-access memory (SRAM) upsets, single-event transients (SETs), single-event gate ruptures (SEGRs), and similar cases where the relevant charge collection depth is less than 10 µm, the worst-case rate for SEE is below 0.001 SEE/board-day. Note that these bounds assume that no SEEs are observed during testing. When SEEs are observed during testing, the board-level test method can establish a reliable event rate in some orbits, though all established rates will be at or above 0.001 SEE/board-day. The board-level test approach we explore has picked up support as a radiation hardness assurance technique over the last twenty years. The approach was originally used to provide a very limited verification of the suitability of low-cost assemblies to be used in the very benign environment of the International Space Station (ISS), in limited-reliability applications. Recently the method has been gaining popularity as a way to establish a minimum level of SEE performance for systems that require somewhat higher reliability than previous applications. This sort of application of the method suggests that a critical analysis of the method is in order. This is also of current interest because the primary facility used for this type of work, the Indiana University Cyclotron Facility (IUCF) (also known as the Integrated Science and Technology (ISAT) hall), has closed permanently, and the selection of alternate test facilities is critically important. This document reviews the main theoretical work on proton testing of assemblies over the last twenty years. It augments this with a review of reported data generated by the method and other data bearing on the limitations of the proton board-level test approach. When protons are incident on a system under test they can produce spallation reactions. From these reactions, secondary particles with linear energy transfers (LETs) significantly higher than those of the incident protons can be produced. These secondary particles, together with the protons, can simulate a subset of the space environment for particles capable of inducing single-event effects (SEEs). The proton board-level test approach has been used to bound SEE rates, establishing a maximum possible SEE rate that a test article may exhibit in space. This bound is not particularly useful in many cases because it is quite loose. We discuss the established limit that the proton board-level test approach leaves us with. The remaining possible SEE rates may be as high as one per ten years for most devices. The situation is actually more problematic for many SEE types with deep charge collection.
In cases with these SEEs, the limits set by the proton board-level test can be on the order of one per 100 days. Because of the limited nature of the bounds established by proton testing alone, it is possible that tested devices will have actual SEE sensitivity that is very low (e.g., fewer than one event in 1 × 10^4 years), but the test method will only be able to establish the limits indicated above. This BoK further examines other benefits of proton board-level testing besides hardness assurance. The primary alternate use is the injection of errors. Error injection, or fault injection, is something that is often done in a simulation environment. But the proton beam has the benefit of injecting the majority of actual SEEs without risk of something being missed, and without the risk of simulation artifacts misleading the SEE investigation.
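For intuition, when a test observes zero events, counting statistics alone set an upper bound on the SEE cross-section. A hedged Python sketch with illustrative fluence and flux values (the looser bounds quoted above additionally reflect what proton testing cannot cover, not just statistics):

```python
import math

# Zero-event Poisson bound: if no events are seen after a proton fluence
# `phi` (p/cm^2), the 95% confidence upper bound on the cross-section is
# sigma < -ln(0.05)/phi (about 3.0/phi).
phi = 1e11                         # test fluence, p/cm^2 (illustrative)
sigma_ul = -math.log(0.05) / phi   # cm^2 per board

# Converting to an on-orbit rate requires an environment flux; the value
# below is purely illustrative and does not represent a specific orbit.
flux = 1e5                         # equivalent particles/cm^2/day (assumed)
print(f"sigma < {sigma_ul:.2e} cm^2 -> rate < {sigma_ul * flux:.1e} SEE/board-day")
```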
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
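One plausible reading of this model, in generic notation rather than necessarily the paper's own, with first-stage decisions x, scenario probabilities p_k, scenario-specific uncertainty sets U_k, and recourse decisions y:

```latex
\min_{x \in X} \; c^{\top}x \;+\; \sum_{k} p_k \,\max_{u \in U_k} \;\min_{y \in Y(x,u)} d^{\top}y
```

That is, the commitment x is chosen before uncertainty resolves, each scenario k is protected against its own worst-case realization u, and the objective averages those worst cases by probability.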
Optimisation of MSW collection routes for minimum fuel consumption using 3D GIS modelling.
Tavares, G; Zsigraiova, Z; Semiao, V; Carvalho, M G
2009-03-01
Collection of municipal solid waste (MSW) may account for more than 70% of the total waste management budget, most of which is for fuel costs. It is therefore crucial to optimise the routing network used for waste collection and transportation. This paper proposes the use of geographical information systems (GIS) 3D route modelling software for waste collection and transportation, which adds one more degree of freedom to the system and allows driving routes to be optimised for minimum fuel consumption. The model takes into account the effects of road inclination and vehicle weight. It is applied to two different cases: routing waste collection vehicles in the city of Praia, the capital of Cape Verde, and routing the transport of waste from different municipalities of Santiago Island to an incineration plant. For the Praia city region, the 3D model that minimised fuel consumption yielded cost savings of 8% as compared with an approach that simply calculated the shortest 3D route. Remarkably, this was true despite the fact that the GIS-recommended fuel reduction route was actually 1.8% longer than the shortest possible travel distance. For the Santiago Island case, the difference was even more significant: a 12% fuel reduction for a similar total travel distance. These figures indicate the importance of considering both the relief of the terrain and fuel consumption in selecting a suitable cost function to optimise vehicle routing.
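The idea of weighting road-network edges by fuel rather than length can be sketched with networkx; the fuel-model coefficients, grades, and the small graph below are invented for illustration only.

```python
import networkx as nx

def fuel_litres(length_km, grade_pct, mass_t, base=0.35, k_slope=0.04, k_mass=0.01):
    # Litres per edge: base rate plus penalties for uphill grade and vehicle mass.
    return length_km * (base + k_slope * max(grade_pct, 0.0) + k_mass * mass_t)

G = nx.DiGraph()
edges = [("depot", "a", 2.0, 8.0), ("a", "plant", 4.0, -1.0),
         ("depot", "b", 3.5, 0.0), ("b", "plant", 3.0, 0.5)]
mass = 12.0  # loaded truck mass in tonnes (assumed)
for u, v, length, grade in edges:
    G.add_edge(u, v, dist=length, fuel=fuel_litres(length, grade, mass))

# With a steep climb on the short route, the cheapest-fuel path differs
# from the shortest-distance path, mirroring the paper's observation.
print("shortest-distance route:", nx.shortest_path(G, "depot", "plant", weight="dist"))
print("minimum-fuel route     :", nx.shortest_path(G, "depot", "plant", weight="fuel"))
```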
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
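A typical scale-invariant cost function of the kind referred to is the varimax-style normalized fourth moment; a short Python check of its scale invariance, which is the property linking MED to blind-equalization costs:

```python
import numpy as np

# Varimax objective used in minimum entropy deconvolution:
# V(y) = sum(y_i^4) / (sum(y_i^2))^2.  Scaling y by any constant leaves
# V unchanged.
def varimax(y):
    y = np.asarray(y, dtype=float)
    return np.sum(y**4) / np.sum(y**2) ** 2

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)
print(f"V(y)   = {varimax(y):.6f}")
print(f"V(5*y) = {varimax(5 * y):.6f}   # identical: scale invariance")
```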
NASA Technical Reports Server (NTRS)
Coakley, P.; Kitterer, B.; Treadaway, M.
1982-01-01
Charging and discharging characteristics of dielectric samples exposed to 1-25 keV and 25-100 keV electrons in a laboratory environment are reported. The materials examined comprised OSR, Mylar, Kapton, perforated Kapton, and Alphaquartz, serving as models for materials employed on spacecraft in geosynchronous orbit. The tests were performed in a vacuum chamber with electron guns whose beams were rastered over the entire surface of the planar samples. The specimens were examined in low-impedance-grounded, high-impedance-grounded, and isolated configurations. The worst-case and average peak discharge currents were observed to be independent of the incident electron energy, the time-dependent changes in the worst-case discharge peak current were independent of the energy, and the predischarge surface potentials were only negligibly dependent on the incident monoenergetic electron energy.
Worst-case space radiation environments for geocentric missions
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Seltzer, S. M.
1976-01-01
Worst-case possible annual radiation fluences of energetic charged particles in the terrestrial space environment, and the resultant depth-dose distributions in aluminum, were calculated in order to establish absolute upper limits to the radiation exposure of spacecraft in geocentric orbits. The results are a concise set of data intended to aid in the determination of the feasibility of a particular mission. The data may further serve as guidelines in the evaluation of standard spacecraft components. Calculations were performed for each significant particle species populating or visiting the magnetosphere, on the basis of volume occupied by or accessible to the respective species. Thus, magnetospheric space was divided into five distinct regions using the magnetic shell parameter L, which gives the approximate geocentric distance (in earth radii) of a field line's equatorial intersect.
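For reference, in a dipole approximation the magnetic shell parameter used here satisfies the field-line equation

```latex
r = L \cos^{2}\lambda_{m},
```

so L is the geocentric distance, in Earth radii, at which the field line crosses the magnetic equator (magnetic latitude λ_m = 0).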
Detecting GNSS spoofing attacks using INS coupling
NASA Astrophysics Data System (ADS)
Tanil, Cagatay
Vulnerability of Global Navigation Satellite Systems (GNSS) users to signal spoofing is a critical threat to positioning integrity, especially in aviation applications, where the consequences are potentially catastrophic. In response, this research describes and evaluates a new approach to directly detect spoofing using integrated Inertial Navigation Systems (INS) and fault detection concepts based on integrity monitoring. The monitors developed here can be implemented into positioning systems using INS/GNSS integration via 1) tightly-coupled, 2) loosely-coupled, and 3) uncoupled schemes. New evaluation methods enable the statistical computation of integrity risk resulting from a worst-case spoofing attack - without needing to simulate an unmanageably large number of individual aircraft approaches. Integrity risk is an absolute measure of safety and a well-established metric in aircraft navigation. A novel closed-form solution to the worst-case time sequence of GNSS signals is derived to maximize the integrity risk for each monitor and used in the covariance analyses. This methodology tests the performance of the monitors against the most sophisticated spoofers, capable of tracking the aircraft position - for example, by means of remote tracking or onboard sensing. Another contribution is a comprehensive closed-loop model that encapsulates the vehicle and compensator (estimator and controller) dynamics. A sensitivity analysis uses this model to quantify the leveraging impact of the vehicle's dynamic responses (e.g., to wind gusts, or to autopilot's acceleration commands) on the monitor's detection capability. The performance of the monitors is evaluated for two safety-critical terminal area navigation applications: 1) autonomous shipboard landing and 2) Boeing 747 (B747) landing assisted with Ground Based Augmentation Systems (GBAS). It is demonstrated that for both systems, the monitors are capable of meeting the most stringent precision approach and landing integrity requirements of the International Civil Aviation Organization (ICAO). The statistical evaluation methods developed here can be used as a baseline procedure in the Federal Aviation Administration's (FAA) certification of spoof-free navigation systems. The final contribution is an investigation of INS sensor quality on detection performance. This determines the minimum sensor requirements to perform standalone GNSS positioning in general en route applications with guaranteed spoofing detection integrity.
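A common building block for such monitors is a chi-square test on the Kalman filter innovation: the GNSS measurement is compared with the INS-predicted measurement, and a spoofed signal drives the normalized residual up. A minimal Python sketch with invented residual and covariance values (the actual monitors and thresholds in this work are derived from the integrity risk analysis):

```python
import numpy as np
from scipy.stats import chi2

z = np.array([12.3, -4.1])          # GNSS-minus-INS residual (m), illustrative
S = np.array([[4.0, 0.5],           # innovation covariance from the filter
              [0.5, 3.0]])

q = float(z @ np.linalg.solve(S, z))      # chi-square test statistic
threshold = chi2.ppf(1 - 1e-6, df=2)      # set by the integrity risk budget
print(f"q = {q:.1f}, threshold = {threshold:.1f}, "
      f"{'ALERT' if q > threshold else 'nominal'}")
```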
NASA Astrophysics Data System (ADS)
Wyss, Max
2013-04-01
An earthquake of M6.3 killed 309 people in L'Aquila, Italy, on 6 April 2009. Subsequently, a judge in L'Aquila convicted seven who had participated in an emergency meeting on March 30, assessing the probability of a major event following the ongoing earthquake swarm. The sentence was six years in prison, a combined fine of 2 million Euros, loss of job, loss of retirement pension, and lawyers' costs. The judge followed the prosecution's accusation that the review by the Commission of Great Risks had conveyed a false sense of security to the population, which consequently did not take its usual precautionary measures before the deadly earthquake. He did not consider the facts that (1) one of the convicted was not a member of the commission and had merely obeyed orders to bring the latest seismological facts to the discussion, (2) another was an engineer who was not required to have any expertise regarding the probability of earthquakes, and (3) two others were seismologists who were not invited to speak to the public at a TV interview and a press conference. This exaggerated judgment was the consequence of an uproar in the population, who felt misinformed and even misled. Faced with a population worried by an earthquake swarm, the head of the Italian Civil Defense is on record ordering that the population be calmed, and the vice head executed this order in a TV interview one hour before the meeting of the Commission by stating "the scientific community continues to tell me that the situation is favorable and that there is a discharge of energy." The first lesson to be learned is that communications to the public about earthquake hazard and risk must not be left in the hands of someone who has gross misunderstandings of seismology; they must be carefully prepared by experts. The more significant lesson is that the approach of calming the population, and the standard probabilistic hazard and risk assessment as practiced by GSHAP, are misleading. The latter has been criticized as incorrect for scientific reasons, and here I argue that it is also ineffective for psychological reasons. Instead of calming people, or underestimating the hazard in strongly active areas by the GSHAP approach, they should be told quantitatively the consequences of the reasonable worst case and be motivated to prepare for it, whether or not it may hit the present or the next generation. In a worst-case scenario for L'Aquila, the number of expected fatalities and injured should have been calculated for an event in the range of M6.5 to M7, as I did for a civil defense exercise in Umbria, Italy. With the prospect that approximately 500 people may die in an earthquake in the immediate or distant future, some residents might have built themselves an earthquake closet (similar to a simple tornado shelter) in a corner of their apartment, into which they might have dashed to safety at the onset of the P-wave, before the destructive S-wave arrived. I conclude that in earthquake-prone areas quantitative loss estimates for a reasonable worst-case earthquake should replace probabilistic hazard and risk estimates. This is a service which experts owe the community. Insurance companies and academics may still find use for probabilistic estimates of losses, especially in areas of low seismic hazard, where the worst-case scenario approach is less appropriate.
Sears, Erika Davis; Burke, James F; Davis, Matthew M; Chung, Kevin C
2013-03-01
The purpose of this study was to (1) understand national variation in delay of emergency procedures in patients with open tibial fracture at the hospital level and (2) compare length of stay and cost in patients cared for at the best- and worst-performing hospitals for delay. The authors retrospectively analyzed the 2003 to 2009 Nationwide Inpatient Sample. Adult patients with open tibial fracture were included. Hospital probability of delay in performing emergency procedures beyond the day of admission was calculated. Multilevel linear regression random-effects models were created to evaluate the relationship between the treating hospital's tendency for delay (in quartiles) and the log-transformed outcomes of length of stay and cost. The final sample included 7029 patients from 332 hospitals. Patients treated at hospitals in the fourth (worst) quartile for delay were estimated to have 12 percent (95 percent CI, 2 to 21 percent) higher cost compared with patients treated at hospitals in the first quartile. In addition, patients treated at hospitals in the fourth quartile had an estimated 11 percent (95 percent CI, 4 to 17 percent) longer length of stay compared with patients treated at hospitals in the first quartile. Patients with open tibial fracture treated at hospitals with more timely initiation of surgical care had lower cost and shorter length of stay than patients treated at hospitals with less timely initiation of care. Policies directed toward mitigating variation in care may reduce unnecessary waste.
Hepatitis A and E Co-Infection with Worst Outcome.
Saeed, Anjum; Cheema, Huma Arshad; Assiri, Asaad
2016-06-01
Infections are still a major problem in developing countries like Pakistan because of poor sewage disposal and economic restraints. Acute viral hepatitis A and E are not uncommon in the pediatric age group because of unhygienic food handling and poor sewage disposal, but the majority recover well without any complications. Co-infections are rare occurrences, and physicians need to be well aware while managing such conditions to avoid the worst outcomes. Co-infection with hepatitis A and E is reported occasionally in the literature; other concurrent infections, such as hepatitis A with Salmonella and with hepatotropic viruses like hepatitis B and C, are also described. Co-infections should be kept in consideration when someone presents with atypical symptoms or an unusual disease course, as in the case presented here. We report a girl who had concurrent acute hepatitis A and E infection, presented with hepatic encephalopathy, and had the worst outcome despite all supportive measures.
Sizing procedures for sun-tracking PV system with batteries
NASA Astrophysics Data System (ADS)
Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu
2017-11-01
Deciding the optimum number of PV panels, wind turbines and batteries (i.e. a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, rough data models or limited recorded data, together with low-resolution hourly averaged meteorological values, are used to test sizing strategies. In this study, active sun-tracking and fixed PV solar power generation values of ready-to-serve commercial products were recorded throughout 2015-2016. Simultaneously, several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) were recorded at high resolution. The hourly energy consumption values of a standard 4-person household, constructed on our campus in Eskisehir, Turkey, were also recorded for the same period. For sizing, novel parametric random process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves are developed, and Monte Carlo experiments considering average and minimum performance cases show that these models provide sizing results with a lower loss-of-load probability (LLP). Furthermore, a novel cost optimization strategy is adopted to show that sun-tracking PV panels provide lower costs by enabling a reduced number of installed batteries. Results are verified against real recorded data.
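Loss-of-load probability is the fraction of demand the system fails to serve. A simple hourly energy-balance sketch in Python, with synthetic generation and demand series and an assumed battery size standing in for the recorded data:

```python
import numpy as np

rng = np.random.default_rng(4)
hours = 24 * 365
gen = np.clip(rng.normal(1.2, 0.8, hours), 0, None)   # kWh/h from PV (synthetic)
load = np.clip(rng.normal(1.0, 0.3, hours), 0, None)  # household demand (synthetic)

cap = 10.0                  # candidate battery capacity, kWh (assumed)
soc, unmet = cap / 2, 0.0
for g, d in zip(gen, load):
    soc = min(cap, soc + g)        # charge (conversion losses ignored here)
    served = min(soc, d)
    soc -= served
    unmet += d - served

print(f"LLP = {unmet / load.sum():.3%}")
```

Sizing then amounts to repeating this balance over candidate panel/battery counts and picking the cheapest combination whose LLP stays below the target.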
Implementation of School Health Promotion: Consequences for Professional Assistance
ERIC Educational Resources Information Center
Boot, N. M. W. M.; de Vries, N. K.
2012-01-01
Purpose: This case study aimed to examine the factors influencing the implementation of health promotion (HP) policies and programs in secondary schools and the consequences for professional assistance. Design/methodology/approach: Group interviews were held in two schools that represented the best and worst case of implementation of a health…
Compression in the Superintendent Ranks
ERIC Educational Resources Information Center
Saron, Bradford G.; Birchbauer, Louis J.
2011-01-01
Sadly, the fiscal condition of school systems now not only is troublesome, but in some cases has surpassed all expectations for the worst-case scenario. Among the states, one common response is to drop funding for public education to inadequate levels, leading to permanent program cuts, school closures, staff layoffs, district dissolutions and…
Cost-effectiveness of diagnostic for malaria in Extra-Amazon Region, Brazil
2012-01-01
Background Rapid diagnostic tests (RDT) for malaria have been demonstrated to be effective, and they should replace microscopy in certain areas. Method The cost-effectiveness of five RDTs and thick smear microscopy was estimated and compared. Data were collected in the Brazilian Extra-Amazon Region. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, laboratory suppliers and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic results were provided to the patient, and the temporal reference was the year 2010. Two costing methods were produced, based on exclusive-use or shared-use microscopy. The results were expressed as costs per adequately diagnosed case, in 2010 U.S. dollars. One-way sensitivity analysis was performed considering key model parameters. Results In the cost-effectiveness analysis with exclusive-use microscopy, the RDT CareStart™ was the most cost-effective diagnostic strategy. Microscopy was the most expensive and most effective, with an additional case adequately diagnosed by microscopy costing US$ 35,550.00 relative to CareStart™. In contrast, in the cost-effectiveness analysis with shared-use microscopy, the thick smear was extremely cost-effective. When a probability of individual access to diagnosis was introduced into the shared-use microscopy model, assuming 100% access to any RDT for a public health system user and, hypothetically, 85% access to microscopy, microscopy's effectiveness was reduced and it was dominated by the RDT CareStart™. Conclusion The cost-effectiveness of malaria diagnosis technologies in the Brazilian Extra-Amazon Region depends on whether microscopy is used exclusively or shared. Under the assumptions of this study, shared-use microscopy would be the most cost-effective strategy of the six technologies evaluated. However, if used exclusively for diagnosing malaria, microscopy would be the worst use of resources. Microscopy would also not be the most cost-effective strategy, even when its structure is shared with other programmes, when the probability of a patient having access to it is reduced. Under these circumstances, the RDT CareStart™ would be the most cost-effective strategy. PMID:23176717
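The comparison rests on the usual incremental cost-effectiveness ratio, ICER = (C1 − C0)/(E1 − E0). A one-line worked example in Python with invented numbers (not the study's inputs):

```python
# ICER sketch: incremental cost per additional case adequately diagnosed,
# comparing microscopy against an RDT.  Figures are illustrative only.
cost_rdt, eff_rdt = 8.50, 0.90    # US$ per patient, P(adequate diagnosis)
cost_mic, eff_mic = 14.00, 0.92

icer = (cost_mic - cost_rdt) / (eff_mic - eff_rdt)
print(f"ICER = US$ {icer:,.2f} per additional case adequately diagnosed")
```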
Redondo-González, O; Tenías-Burillo, J M; Ruiz-Gonzalo, J
2017-07-01
Vaccination has reduced rotavirus hospitalizations by 25% in European regions with low-moderate vaccine availability. We aimed to quantify the reduction in hospital costs after the longest period in which Rotarix® and Rotateq® were simultaneously commercially available in Spain. Cases, length of stay (LOS), and diagnosis-related groups (DRGs) were retrieved from the Minimum Basic Data Set. Healthcare expenditure was estimated through the cost accounting system Gescot®. DRGs were clustered: I, non-bacterial gastroenteritis with complications; II, without complications; III, requiring surgical/other procedures or neonatal cases (highest DRG weights). Comparisons between pre-vaccine (2003-2005) and post-vaccine (2007-2009) hospital stays and costs by DRG group were made. Rotaviruses were the most common agents of specific-coded gastroenteritis (N = 1657/5012). LOS and extended LOS for rotavirus fell significantly in 2007-2009 (β-coefficient = -0.43, 95% confidence interval (95% CI) -0.68 to -0.17; and odds ratio 0.62, 95% CI 0.50-0.76, respectively). Overall, costs attributable to rotavirus hospitalizations fell approximately €244 per patient (95% CI -365 to -123); the decrease in DRG group III was €2269 per patient (95% CI -4098 to -380). We conclude there were modest savings in hospital costs, largely attributable to cases with higher DRG weights, and a faster recovery. A universal rotavirus vaccination program deserves to be re-evaluated, given its potentially high impact on both at-risk children and societal costs.
Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka
2018-06-01
The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
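The optimization can be posed with standard tooling; a toy networkx example of a minimum cost flow instance (the graph is a generic stand-in, not the paper's memory-allocation graph):

```python
import networkx as nx

# Nodes carry a 'demand' (negative = supply); edges carry 'capacity' and
# per-unit 'weight'.  min_cost_flow finds the cheapest feasible routing.
G = nx.DiGraph()
G.add_node("s", demand=-4)   # source supplies 4 units
G.add_node("t", demand=4)    # sink absorbs 4 units
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=2)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)

flow = nx.min_cost_flow(G)
print(flow)                          # per-edge flow assignment
print(nx.cost_of_flow(G, flow))      # total cost of the optimal routing
```

The paper's speedup comes from replacing such a general-purpose solver with one specialized to the structure of its particular graphs.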
Aero/structural tailoring of engine blades (AERO/STAEBL)
NASA Technical Reports Server (NTRS)
Brown, K. W.
1988-01-01
This report describes the Aero/Structural Tailoring of Engine Blades (AERO/STAEBL) program, which is a computer code used to perform engine fan and compressor blade aero/structural numerical optimizations. These optimizations seek a blade design of minimum operating cost that satisfies realistic blade design constraints. This report documents the overall program (i.e., input, optimization procedures, approximate analyses) and also provides a detailed description of the validation test cases.
NASA Astrophysics Data System (ADS)
Ngastiti, P. T. B.; Surarso, Bayu; Sutimin
2018-05-01
Transportation problems in distribution concern moving a commodity or goods from supply points to demand points at minimum transportation cost. In a fuzzy transportation problem, the transport costs, supply and demand are fuzzy quantities. The case study concerns CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets that has more than one distributor. We use the zero point and zero suffix methods to find the minimum transportation cost, applying robust ranking techniques for the defuzzification process. The results show that the zero suffix method requires fewer iterations than the zero point method.
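Robust ranking defuzzifies a fuzzy number by integrating the midpoints of its α-cuts; for a triangular fuzzy number (a, b, c) the α-cut is [a + α(b − a), c − α(c − b)], and the integral reduces to (a + 2b + c)/4. A worked Python example with an invented fuzzy cost:

```python
# Robust ranking of a triangular fuzzy number (a, b, c):
# R = integral over alpha in [0,1] of the alpha-cut midpoint = (a + 2b + c)/4.
def robust_rank(a, b, c):
    return (a + 2 * b + c) / 4.0

# Illustrative fuzzy unit transport cost "about 10": (8, 10, 13).
print(robust_rank(8, 10, 13))   # 10.25 -- crisp cost fed to zero point/suffix
```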
Chadha, V K; Sebastian, George; Kumar, P
2016-01-01
We undertook a cost analysis for the diagnosis of pulmonary tuberculosis (PTB) using the present algorithm under the Revised National Tuberculosis Control Programme and using Xpert MTB/RIF (Xpert) as a frontline test or in conjunction with smear microscopy and/or chest radiography. Costs were estimated for different strategies: (A) the present algorithm, involving sputum smear examination followed by an antibiotic trial in smear-negative patients, repeat smear examination (RE) if symptoms continue, and chest radiography if RE is negative; (B) direct Xpert; (C) smear microscopy followed by Xpert in smear-negative patients; (D) radiography followed by Xpert in those having abnormal pulmonary shadows; and (E) smear examination followed by radiography among smear-negative patients and Xpert in the presence of abnormal pulmonary shadows. The cost to the programme was estimated to be lowest with Strategy A and highest with Strategy B. Compared with the latter, the programme cost is reduced by 7%, 4.5%, and 17.4% by strategies C, D, and E, respectively. The cost to the group of individuals with presumptive PTB and their attendants is significantly higher for Strategy A than for the other four strategies. Among the latter, the patients' cost was lowest with Strategy B and highest with Strategy C. The programme cost per case diagnosed was lowest with Strategy A and highest with Strategy B. The patient cost per case diagnosed was highest with Strategy A and lowest with Strategy B. Among the Xpert-based strategies, Strategy E had the lowest programme as well as overall cost per case diagnosed. Strategy E may be chosen for the diagnosis of PTB. When resources are no longer a constraint, direct Xpert would reduce the costs incurred by patients. Copyright © 2016 Tuberculosis Association of India. Published by Elsevier B.V. All rights reserved.
Global Norms and Local Politics: Uses and Abuses of Education Gender Quotas in Tajikistan
ERIC Educational Resources Information Center
Silova, Iveta; Abdushukurova, Tatiana
2009-01-01
In Central Asia, the post-Soviet transformation period has been accompanied by significant economic and social costs, including the widening of the gender gaps in politics, economy and the social sphere. Tajikistan, which receives the largest amount of international aid and has the worst record of gender inequity in Central Asia, has quickly…
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Csank, Jeffrey
2015-01-01
Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95 percent response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76 percent. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76 percent. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.
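A sketch of the characterization step, assuming the Monte Carlo outputs are summarized by a covariance ellipse whose extreme extent estimates the worst case; the distribution parameters and coverage factor below are illustrative, and the paper's binary search over design points is omitted:

```python
import numpy as np

# Synthetic Monte Carlo cloud of (response time [s], min HPC surge margin [%])
# for one controller design point, standing in for closed-loop simulations.
rng = np.random.default_rng(2)
samples = rng.multivariate_normal(mean=[4.0, 12.0],
                                  cov=[[0.04, -0.03], [-0.03, 0.25]],
                                  size=2000)

mu = samples.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(samples.T))   # ellipse axes from eigensystem
k = 3.0                                          # coverage factor (assumed)
extent = k * np.sqrt(vals) * np.abs(vecs)        # reach of each semi-axis per coordinate

worst_rt = mu[0] + extent[0].max()               # slowest plausible response time
worst_sm = mu[1] - extent[1].max()               # lowest plausible surge margin
print(f"worst-case response time ~ {worst_rt:.2f} s, min SM ~ {worst_sm:.2f} %")
```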
Hospital Costs of Foreign Non-Resident Patients: A Comparative Analysis in Catalonia, Spain.
Arroyo-Borrell, Elena; Renart-Vicens, Gemma; Saez, Marc; Carreras, Marc
2017-09-14
Although patient mobility has increased over the world, in Europe there is a lack of empirical studies. The aim of this study was to compare foreign non-resident patients with domestic patients for the particular Catalan case, focusing on patient characteristics, hospitalisation costs, and differences in costs depending on the typology of hospital in which they are treated. We used data from the 2012 Minimum Basic Data Set-Acute Care hospitals (CMBD-HA) in Catalonia. We matched two case-control groups: first, foreign non-resident patients versus domestic patients and, second, foreign non-resident patients treated by Regional Public Hospitals versus other types of hospitals. Hospitalisation costs were modelled using a GLM Gamma with a log link. Our results show that foreign non-resident patients were significantly less costly than domestic patients (12% cheaper). Our findings also suggested differences in the characteristics of foreign non-resident patients using Regional Public Hospitals or other kinds of hospitals, although we did not observe significant differences in healthcare costs. Nevertheless, women, patients aged 15-24 and 35-44 years, and days of stay were less costly in Regional Public Hospitals. In general, acute hospitalizations of foreign non-resident patients while they are on holiday cost substantially less than those of domestic patients. The typology of hospital is not found to be a relevant factor influencing costs.
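The model form can be sketched with statsmodels: a Gamma GLM with log link, where exp(coefficient) is the multiplicative effect on expected cost. The data frame below is synthetic, generated with a built-in cost gap of roughly 12%, purely to demonstrate the model form:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "foreign": rng.integers(0, 2, n),     # 1 = foreign non-resident (synthetic)
    "los": rng.integers(1, 15, n),        # length of stay, days (synthetic)
})
mu = np.exp(6.0 + 0.05 * df["los"] - 0.13 * df["foreign"])   # true ~12% gap
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

model = smf.glm("cost ~ foreign + los", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(model.params)
print(f"foreign vs domestic cost ratio: {np.exp(model.params['foreign']):.3f}")
```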
Deganello, A; Gitti, G; Parrinello, G; Muratori, E; Larotonda, G; Gallo, O
2013-12-01
Reconstructive surgery of the head and neck region has undergone tremendous advancement over the past three decades, and the success rate of free tissue transfers has risen to greater than 95%. It must always be considered that not all patients are ideal candidates for free flap reconstruction, and also that not every defect strictly requires a free flap transfer to achieve good functional results. At our institution, free flap reconstruction is the first choice, although we use pedicled alternative flaps for most frail patients suffering from severe comorbidities, and for pretreated patients presenting a second primary or a recurrent cancer. From July 2006 to May 2010, 54 consecutive patients underwent soft tissue reconstruction of oral cavity and oropharyngeal defects. We divided the cohort into three groups: Group 1 (G1): 16 patients in good general condition who received free radial forearm flap reconstruction; Group 2 (G2): 18 high-risk patients who received reconstruction with an infrahyoid flap; Group 3 (G3): 20 patients who received temporal flap (10 cases) or pectoral flap (10 cases) reconstruction. We must highlight that the pedicled alternative flaps were used in elderly, unfavourable and frail patients, in whom medical costs usually tend to rise rather than decrease. We compared the healthcare costs of the three groups, calculating real costs in each group from a review of medical records and operating room registers, and calculating the corresponding DRG system reimbursement. For real costs, we found a statistically significant difference among groups: in G1 the average total cost per patient was € 22,924, in G2 it was € 18,037, and in G3 it was € 19,872 (p = 0.043). The amount of the refund, based on the DRG system, was € 7,650 per patient, independently of the type of surgery. Our analysis shows that the use of alternative non-microvascular techniques in high-risk patients is functionally and oncologically sound, and can even produce cost savings. In particular, the infrahyoid flap (G2) ensures excellent functional results, accompanied by the best economic savings, in the highest-risk group of patients. Our data reflect a large disconnection between the DRG system and actual treatment costs.
van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew
2015-04-01
Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests, and full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.
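The costing logic reduces to simple arithmetic: API mass per course times cost per gram, plus the 40% formulation margin. A sketch with a placeholder per-gram price (not the paper's input):

```python
# Back-of-envelope reproduction of the costing logic.  The per-gram API
# price is an assumed placeholder; dose and duration follow a generic
# 400 mg once-daily, 12-week regimen.
api_cost_per_gram = 1.0      # US$/g, illustrative only
daily_dose_mg = 400
days = 84                    # 12-week course

api_grams = daily_dose_mg / 1000.0 * days
cost = api_grams * api_cost_per_gram * 1.40   # 40% margin for formulation
print(f"API mass {api_grams:.1f} g -> estimated course cost US$ {cost:.2f}")
```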
[Socio-demographic and food insecurity characteristics of soup-kitchen users in Brazil].
Godoy, Kátia Cruz; Sávio, Karin Eleonora Oliveira; Akutsu, Rita de Cássia; Gubert, Muriel Bauermann; Botelho, Raquel Braz Assunção
2014-06-01
This study aimed to characterize the users of a government soup-kitchen program and the association with family food insecurity, using a cross-sectional design and a random sample of 1,637 soup-kitchen users. The study used a questionnaire with socioeconomic variables and the Brazilian Food Insecurity Scale, and measured weight and height. The chi-square test was applied, and crude and adjusted prevalence ratios (PR) were calculated using Poisson regression. Prevalent characteristics included per capita income ranging from one-half to one minimum wage (35.1%), complete middle school (39.8%), and food security (59.4%). Users in the North of Brazil showed the worst indicators: incomplete primary school (39.8%), per capita income up to one-half the minimum wage (50.8%), and food insecurity (55.5%). Prevalence ratios for food insecurity were higher among users with per capita income up to one-fourth the minimum wage (p < 0.05). Income was the only variable that remained associated with a higher prevalence of food insecurity in the adjusted PR. Knowing the characteristics of soup-kitchen users experiencing food insecurity can help orient the program's work, location, and operations.
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2013 CFR
2013-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2012 CFR
2012-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2014 CFR
2014-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
Adaptive Attitude Control of the Crew Launch Vehicle
NASA Technical Reports Server (NTRS)
Muse, Jonathan
2010-01-01
An H∞-NMA architecture for the Crew Launch Vehicle was developed in a state-feedback setting. The minimal-complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H∞-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.
Optimal cooperative time-fixed impulsive rendezvous
NASA Technical Reports Server (NTRS)
Mirfakhraie, Koorosh; Conway, Bruce A.; Prussing, John E.
1988-01-01
A method has been developed for determining optimal, i.e., minimum fuel, trajectories for the fixed-time cooperative rendezvous of two spacecraft. The method presently assumes that the vehicles perform a total of three impulsive maneuvers with each vehicle being active, that is, making at least one maneuver. The cost of a feasible 'reference' trajectory is improved by an optimizer which uses an analytical gradient developed using primer vector theory and a new solution for the optimal terminal (rendezvous) maneuver. Results are presented for a large number of cases in which the initial orbits of both vehicles are circular but in which the initial positions of the vehicles and the allotted time for rendezvous are varied. In general, the cost of the cooperative rendezvous is less than that of rendezvous with one vehicle passive. Further improvement in cost may be obtained in the future when additional, i.e., midcourse, impulses are allowed and inserted as indicated for some cases by the primer vector histories which are generated by the program.
Optimal shielding design for minimum materials cost or mass
Woolley, Robert D.
2015-12-02
The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
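For reference, the first-order conditions the abstract invokes, in their generic textbook forms (the author's specific cost integrand and state equations are not given in the abstract, so the symbols below are placeholders):

```latex
% Generic forms of the conditions named above: L is the integrand of a
% cost functional J over a shield profile u(x); lambda is the costate
% for state dynamics s' = f(x, s, u).
\begin{align*}
  J[u] &= \int_{x_0}^{x_1} L\bigl(x, u(x), u'(x)\bigr)\,dx,
  & \frac{\partial L}{\partial u} - \frac{d}{dx}\,\frac{\partial L}{\partial u'} &= 0
  \quad\text{(Euler-Lagrange)} \\
  H &= L + \lambda^{\mathsf T} f(x, s, u),
  & u^{*}(x) &= \arg\min_{u \in U} H
  \quad\text{(Pontryagin)}
\end{align*}
```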
Olofsson, Johanna; Barta, Zsolt; Börjesson, Pål; Wallberg, Ola
2017-01-01
Cellulase enzymes have been reported to contribute a significant share of the total costs and greenhouse gas emissions of lignocellulosic ethanol production today. A potential future alternative to purchasing enzymes from an off-site manufacturer is to integrate enzyme and ethanol production, using microorganisms and part of the lignocellulosic material as feedstock for enzymes. This study modelled two such integrated process designs for ethanol from spruce logging residues, and compared them to an off-site case based on existing data regarding purchased enzymes. Greenhouse gas emissions and primary energy balances were studied in a life-cycle assessment, and cost performance in a techno-economic analysis. The base case scenario suggests that greenhouse gas emissions per MJ of ethanol could be significantly lower in the integrated cases than in the off-site case. However, the difference between the integrated and off-site cases is reduced with alternative assumptions regarding enzyme dosage and the environmental impact of the purchased enzymes. The comparison of primary energy balances did not show any significant difference between the cases. The minimum ethanol selling price, to reach break-even costs, was from 0.568 to 0.622 EUR/L for the integrated cases, as compared to 0.581 EUR/L for the off-site case. An integrated process design could reduce greenhouse gas emissions from lignocellulose-based ethanol production, and the cost of an integrated process could be comparable to purchasing enzymes produced off-site. This study focused on the environmental and economic assessment of an integrated process, and in order to strengthen the comparison to the off-site case, more detailed and updated data regarding industrial off-site enzyme production are especially important.
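The minimum ethanol selling price is the break-even point of the techno-economic analysis. A minimal sketch of such a break-even calculation, assuming a simple annualized-capital model with hypothetical inputs rather than the study's TEA data:

```python
# Sketch of a break-even minimum ethanol selling price (MESP) calculation.
# All inputs are hypothetical placeholders, not the study's TEA inputs.
def mesp(capex_eur, opex_eur_per_year, ethanol_L_per_year,
         discount_rate=0.10, lifetime_years=20):
    """Price (EUR/L) at which discounted revenues equal discounted costs."""
    # Capital recovery factor converts up-front capex to an equal annual charge.
    crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)
    annualized_capex = capex_eur * crf
    return (annualized_capex + opex_eur_per_year) / ethanol_L_per_year

print(f"{mesp(2.0e8, 4.5e7, 1.0e8):.3f} EUR/L")  # ~0.685 EUR/L for these inputs
```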
Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome
Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack
2016-01-01
Background: Gleason scoring (GS) has major deficiencies, and a novel system of five grade groups (GS ⩽6; 3+4; 4+3; 8; ⩾9) has been recently agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse as outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Methods: Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core, as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Results: Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. Conclusions: This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure. PMID:27100731
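A minimal sketch of the survival workflow described above (log-rank test across the five grade groups, then Cox proportional hazards regression), using the lifelines package with synthetic placeholder data rather than the cohort itself:

```python
# Sketch: log-rank test and Cox PH regression over five grade groups.
# Column names and data are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(0)
n = 988
df = pd.DataFrame({
    "years":       rng.exponential(10, n),   # follow-up time
    "pca_death":   rng.binomial(1, 0.3, n),  # event indicator
    "grade_group": rng.integers(1, 6, n),    # five grade groups (1..5)
})

# Univariable log-rank test across the five grade groups
result = multivariate_logrank_test(df["years"], df["grade_group"], df["pca_death"])
print("log-rank p =", result.p_value)

# Cox proportional hazards regression on grade group
cph = CoxPHFitter().fit(df, duration_col="years", event_col="pca_death")
cph.print_summary()
```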
Tidal extension and sea-level rise: recommendations for a research agenda
Ensign, Scott H.; Noe, Gregory
2018-01-01
Sea-level rise is pushing freshwater tides upstream into formerly non-tidal rivers. This tidal extension may increase the area of tidal freshwater ecosystems and offset loss of ecosystem functions due to salinization downstream. Without considering how gains in ecosystem functions could offset losses, landscape-scale assessments of ecosystem functions may be biased toward worst-case scenarios of loss. To stimulate research on this concept, we address three fundamental questions about tidal extension: Where will tidal extension be most evident, and can we measure it? What ecosystem functions are influenced by tidal extension, and how can we measure them? How do watershed processes, climate change, and tidal extension interact to affect ecosystem functions? Our preliminary answers lead to recommendations that will advance tidal extension research, enable better predictions of the impacts of sea-level rise, and help balance the landscape-scale benefits of ecosystem function with costs of response.
Continuity planning for workplace infectious diseases.
Welch, Nancy; Miller, Pamela Blair; Engle, Lisa
2016-01-01
Traditionally, business continuity plans prepare for worst-case scenarios; people plan for the exception rather than the common. Plans focus on infrastructure damage and recovery wrought by such disasters as hurricanes, terrorist events or tornadoes. Yet, another very real threat looms present every day, every season and can strike without warning, wreaking havoc on the major asset -- human capital. Each year, millions of dollars are lost in productivity, healthcare costs, absenteeism and services due to infectious, communicable diseases. Sound preventive risk management and recovery strategies can avert this annual decimation of staff and ensure continuous business operation. This paper will present a strong economic justification for the recognition, prevention and mitigation of communicable diseases as a routine part of continuity planning for every business. Recommendations will also be provided for environmental/engineering controls as well as personnel policies that address employee and customer protection, supply chain contacts and potential legal issues.
Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements
NASA Astrophysics Data System (ADS)
Appel, Pontus
2005-01-01
For full three-axis attitude determination, the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells, one placed on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the part of the Earth's surface that is illuminated by the Sun and visible from the satellite. Depending on the reflectivity of the Earth's surface, the satellite's position, and the Sun's position, the albedo light changes. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model consisting of data tables is transferred into polynomial functions in order to save memory space. For an absolute worst case, the attitude determination error can be held below 2°. In a nominal case it is better than 1°.
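A minimal sketch of coarse-sun-sensor vector determination, assuming each cell obeys an ideal cosine law and ignoring the albedo correction that the work develops (face naming and current values are illustrative):

```python
# Sketch: Sun-vector estimate from six body-mounted solar cells, one per
# face. Ideal cosine law assumed: I = I_max * max(cos(angle to Sun), 0),
# so opposite-face current differences give the Sun direction components.
import numpy as np

def sun_vector(currents, i_max):
    """currents: dict of face currents keyed '+x','-x','+y','-y','+z','-z'."""
    s = np.array([
        currents["+x"] - currents["-x"],
        currents["+y"] - currents["-y"],
        currents["+z"] - currents["-z"],
    ]) / i_max
    norm = np.linalg.norm(s)
    return s / norm if norm > 0 else s

# Example: Sun mostly along +x with a small +z component
print(sun_vector({"+x": 0.95, "-x": 0.0, "+y": 0.0,
                  "-y": 0.0, "+z": 0.31, "-z": 0.0}, i_max=1.0))
```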
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization's performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the little effect that the selection of a particular metaheuristic and the variations in their operational parameters have on this optimization problem.
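A minimal sketch of such a nonparametric multicompare workflow, assuming a Friedman test over matched runs followed by Bonferroni-corrected pairwise Wilcoxon tests (the paper's exact post-hoc procedure may differ); data are random placeholders:

```python
# Sketch: Friedman test over four metaheuristics' localization errors,
# then Bonferroni-corrected pairwise Wilcoxon signed-rank tests.
from itertools import combinations
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(1)
errors = {m: rng.gamma(2.0, 1.0, 30)      # localization error, 30 matched runs
          for m in ["SA", "GA", "PSO", "DE"]}

stat, p = friedmanchisquare(*errors.values())
print(f"Friedman: stat={stat:.2f}, p={p:.4f}")

pairs = list(combinations(errors, 2))
for a, b in pairs:
    _, p_ab = wilcoxon(errors[a], errors[b])
    print(a, "vs", b, "adjusted p =", min(1.0, p_ab * len(pairs)))
```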
On optimal current patterns for electrical impedance tomography.
Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D
2005-02-01
We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrices are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimates of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogeneous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
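The trigonometric patterns referenced above can be written down directly. A minimal sketch for L equally spaced electrodes (equal spacing is a simplifying assumption; the paper also treats arbitrary electrode distributions):

```python
# Sketch: trigonometric current patterns for a circular EIT system with
# L equally spaced electrodes. Pattern k applies cos(k*theta) or
# sin(k*theta) at electrode angle theta; each pattern sums to zero net
# injected current, giving L-1 independent patterns.
import numpy as np

def trig_patterns(L):
    """Return an (L-1) x L matrix of trigonometric current patterns."""
    theta = 2 * np.pi * np.arange(L) / L
    rows = []
    for k in range(1, L // 2 + 1):
        rows.append(np.cos(k * theta))
        if k < L / 2:                       # sin pattern vanishes at k = L/2
            rows.append(np.sin(k * theta))
    return np.array(rows)

P = trig_patterns(16)
print(P.shape)                          # (15, 16): L-1 independent patterns
print(np.allclose(P.sum(axis=1), 0.0))  # zero net current per pattern
```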
You can use this free software program to complete the Off-site Consequence Analyses (both worst case scenarios and alternative scenarios) required under the Risk Management Program rule, so that you don't have to do calculations by hand.
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, J.V. III; Cramer, S.N.; Knight, J.R.
1980-09-01
Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.
NASA Technical Reports Server (NTRS)
Olson, S. L.
2004-01-01
NASA's current method of material screening determines fire resistance under conditions representing a worst case for normal-gravity flammability: the Upward Flame Propagation Test (Test 1). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.
LANDSAT-D MSS/TM tuned orbital jitter analysis model LDS900
NASA Technical Reports Server (NTRS)
Pollak, T. E.
1981-01-01
The final LANDSAT-D orbital dynamic math model (LSD900), comprised of all test-validated substructures, was used to evaluate the jitter response of the MSS/TM experiments. A dynamic forced-response analysis was performed at both the MSS and TM locations on all structural modes considered (through 200 Hz). The analysis determined the roll angular response of the MSS/TM experiments to excitation generated by component operation. Cross-axis and cross-experiment responses were also calculated. The excitations were analytically represented by seven- and nine-term Fourier series approximations, for the MSS and TM experiments respectively, which enabled linear harmonic solution techniques to be applied to the response calculations. Single worst-case jitter was estimated by variations of the eigenvalue spectrum of model LSD900. The probability of any worst-case mode occurrence was investigated.
NASA Astrophysics Data System (ADS)
Harabuchi, Yu; Taketsugu, Tetsuya; Maeda, Satoshi
2017-04-01
We report a new approach to search automatically for structures of minimum-energy conical intersections (MECIs). The gradient projection (GP) method and the single-component artificial force induced reaction (SC-AFIR) method were combined in the present approach. As case studies, MECIs of benzene and naphthalene between their ground and first excited singlet electronic states (S0/S1-MECIs) were explored. All S0/S1-MECIs reported previously were obtained automatically. Furthermore, the number of force calculations was reduced compared to that required in the previous search. Improved convergence in the step in which various geometrical displacements are induced by SC-AFIR would contribute to the cost reduction.
An Alaskan Theater Airlift Model.
1982-02-19
overt attack on American soil. In any case, such a reaction represents the worst-case scenario in that theater forces would be denied the advantages of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-15
... Service (NPS) for the Florida leafwing and the pine rockland ecosystem, in general. Sea Level Rise... habitat. In the best case scenario, which assumes low sea level rise, high financial resources, proactive... human population. In the worst case scenario, which assumes high sea level rise, low financial resources...
A Different Call to Arms: Women in the Core of the Communications Revolution.
ERIC Educational Resources Information Center
Rush, Ramona R.
A "best case" model for the role of women in the postindustrial communications era predicts positive leadership roles based on the preindustrial work characteristics of cooperation and consensus. A "worst case" model finds women entrepreneurs succumbing to the competitive male ethos and extracting the maximum amount of work…
Global Value Chain and Manufacturing Analysis on Geothermal Power Plant Turbines: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akar, Sertac; Augustine, Chad R; Kurup, Parthiv
The global geothermal electricity market has grown significantly over the last decade and is expected to reach a total installed capacity of 18.4 GWe in 2021 (GEA, 2016). Currently, geothermal project developers customize the size of the power plant to fit the resource being developed. In particular, the turbine is designed and sized to optimize efficiency and resource utilization for electricity production; most often, other power plant components are then chosen to complement the turbine design. These custom turbine designs demand one-off manufacturing processes, which result in higher manufacturing setup costs, longer lead times, and higher capital costs overall in comparison to larger-volume line manufacturing processes. In contrast, turbines produced in standard increments, manufactured in larger volumes, could result in lower costs per turbine. This study focuses on analysis of the global supply chain and manufacturing costs for Organic Rankine Cycle (ORC) turboexpanders and steam turbines used in geothermal power plants. In this study, we developed a manufacturing cost model to identify requirements for equipment, facilities, raw materials, and labor. We analyzed three different cases: 1) a 1 MWe geothermal ORC turboexpander, 2) a 5 MWe ORC turboexpander, and 3) a 20 MWe geothermal steam turbine, and calculated the cost of manufacturing the major components, such as the impellers/blades, shaft/rotor, nozzles, inlet guide vanes, disks, and casings. Then we used discounted cash flow (DCF) analysis to calculate the minimum sustainable price (MSP). MSP is the minimum price that a company must sell its product for in order to pay back the capital and operating expenses during the plant lifetime (CEMAC, 2017). The results showed that MSP could vary widely, between $893/kW and $30/kW, based on turbine size, standardization, and volume of manufacturing. The analysis also showed that the economy of scale applies both to the size of the turbine and the number manufactured in a single run. Sensitivity analysis indicated these savings come largely from reduced labor costs for design, engineering, and manufacturing setup.
Global Value Chain and Manufacturing Analysis on Geothermal Power Plant Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akar, Sertac; Augustine, Chad R; Kurup, Parthiv
The global geothermal electricity market has grown significantly over the last decade and is expected to reach a total installed capacity of 18.4 GWe in 2021 (GEA, 2016). Currently, geothermal project developers customize the size of the power plant to fit the resource being developed. In particular, the turbine is designed and sized to optimize efficiency and resource utilization for electricity production; most often, other power plant components are then chosen to complement the turbine design. These custom turbine designs demand one-off manufacturing processes, which result in higher manufacturing setup costs, longer lead times, and higher capital costs overall in comparison to larger-volume line manufacturing processes. In contrast, turbines produced in standard increments, manufactured in larger volumes, could result in lower costs per turbine. This study focuses on analysis of the global supply chain and manufacturing costs for Organic Rankine Cycle (ORC) turboexpanders and steam turbines used in geothermal power plants. In this study, we developed a manufacturing cost model to identify requirements for equipment, facilities, raw materials, and labor. We analyzed three different cases: 1) a 1 MWe geothermal ORC turboexpander, 2) a 5 MWe ORC turboexpander, and 3) a 20 MWe geothermal steam turbine, and calculated the cost of manufacturing the major components, such as the impellers/blades, shaft/rotor, nozzles, inlet guide vanes, disks, and casings. Then we used discounted cash flow (DCF) analysis to calculate the minimum sustainable price (MSP). MSP is the minimum price that a company must sell its product for in order to pay back the capital and operating expenses during the plant lifetime (CEMAC, 2017). The results showed that MSP could vary widely, between $893/kW and $30/kW, based on turbine size, standardization, and volume of manufacturing. The analysis also showed that the economy of scale applies both to the size of the turbine and the number manufactured in a single run. Sensitivity analysis indicated these savings come largely from reduced labor costs for design, engineering, and manufacturing setup.
Design of a blade stiffened composite panel by a genetic algorithm
NASA Technical Reports Server (NTRS)
Nagendra, S.; Haftka, R. T.; Gurdal, Z.
1993-01-01
Genetic algorithms (GAs) readily handle discrete problems, and can be made to generate many optima, as is presently illustrated for the case of design for minimum-weight stiffened panels with buckling constraints. The GA discrete design procedure proved superior to extant alternatives for both stiffened panels with cutouts and without cutouts. High computational costs are, however, associated with this discrete design approach at the current level of its development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, R.S.
1989-06-01
For a vehicle operating across arbitrarily-contoured terrain, finding the most fuel-efficient route between two points can be viewed as a high-level global path-planning problem with traversal costs and stability dependent on the direction of travel (anisotropic). The problem assumes a two-dimensional polygonal map of homogeneous cost regions for terrain representation constructed from elevation information. The anisotropic energy cost of vehicle motion has a non-braking component dependent on horizontal distance, a braking component dependent on vertical distance, and a constant path-independent component. The behavior of minimum-energy paths is then proved to be restricted to a small, but optimal, set of traversal types. An optimal-path-planning algorithm, using a heuristic search technique, reduces the infinite number of paths between the start and goal points to a finite number by generating sequences of goal-feasible window lists from analyzing the polygonal map and applying pruning criteria. The pruning criteria consist of visibility analysis, heading analysis, and region-boundary constraints. Each goal-feasible window list specifies an associated convex optimization problem, and the best of all locally-optimal paths through the goal-feasible window lists is the globally-optimal path. These ideas have been implemented in a computer program, with results showing considerably better performance than the exponential average-case behavior predicted.
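A minimal sketch of the anisotropic per-segment energy model described above, with hypothetical coefficients (the report's actual vehicle parameters are not given here):

```python
# Sketch: anisotropic segment energy = non-braking term (proportional to
# horizontal distance) + braking term (proportional to vertical drop,
# charged only when descending) + constant per-segment term.
# Coefficients are illustrative placeholders.
import math

def segment_energy(dx, dy, dz, k_horiz=1.0, k_brake=0.5, k_const=0.1):
    horiz = math.hypot(dx, dy)
    non_braking = k_horiz * horiz        # propulsion cost over the ground
    braking = k_brake * max(-dz, 0.0)    # braking cost on descents only
    return non_braking + braking + k_const

print(segment_energy(30.0, 40.0, -5.0))  # 50 m horizontal, 5 m descent -> 52.6
```

Note the anisotropy: traversing the same segment uphill (dz = +5.0) incurs no braking term, so cost depends on the direction of travel, as the abstract states.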
Biobjective planning of GEO debris removal mission with multiple servicing spacecrafts
NASA Astrophysics Data System (ADS)
Jing, Yu; Chen, Xiao-qian; Chen, Li-hu
2014-12-01
The mission planning of GEO debris removal with multiple servicing spacecrafts (SScs) is studied in this paper. Specifically, the SScs are considered to be initially on the GEO belt, and they should rendezvous with debris of different orbital slots and different inclinations, remove them to the graveyard orbit, and finally return to their initial locations. Three key problems should be resolved here: task assignment, mission sequence planning, and transfer trajectory optimization for each SSc. The minimum-cost, two-impulse phasing maneuver is used for each rendezvous. The objective is to find a set of optimal planning schemes with minimum fuel cost and travel duration. Considering this mission as a hybrid optimal control problem, a mathematical model is proposed. A modified multi-objective particle swarm optimization is employed to address the model. Numerous examples are carried out to demonstrate the effectiveness of the model and solution method. In this paper, single-SSc and multiple-SSc scenarios with the same amount of fuel are compared. Numerous experiments indicate that, for a given GEO debris removal mission, which alternative (single-SSc or multiple-SSc) is better (costs less fuel and consumes less travel time) depends on many factors. Although in some cases multiple-SSc scenarios may perform worse than single-SSc scenarios, the extra costs are considered worth the gain in mission safety and robustness.
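The two-impulse phasing maneuver used for each rendezvous follows standard textbook relations. A minimal sketch at GEO, assuming a coplanar phase shift spread over a chosen number of revolutions (inclination changes, which the paper also handles, are omitted):

```python
# Sketch: delta-v of a two-impulse phasing maneuver at GEO. To gain phase
# d_theta over `revs` revolutions, fly a phasing orbit with a shorter
# period, then re-circularize. Textbook relations, not the paper's optimizer.
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_GEO = 4.2164e7       # GEO radius, m

def phasing_dv(d_theta, revs=1):
    t_geo = 2 * math.pi * math.sqrt(R_GEO**3 / MU)
    t_phase = t_geo * (1 - d_theta / (2 * math.pi * revs))
    a_phase = (MU * (t_phase / (2 * math.pi))**2) ** (1 / 3)
    v_circ = math.sqrt(MU / R_GEO)
    v_burn = math.sqrt(MU * (2 / R_GEO - 1 / a_phase))  # vis-viva at R_GEO
    return 2 * abs(v_burn - v_circ)                     # depart + return burns

print(f"{phasing_dv(math.radians(30), revs=3):.1f} m/s")  # ~58 m/s
```

Spreading the phase shift over more revolutions lowers the delta-v at the expense of travel time, which is exactly the fuel/duration trade-off the biobjective formulation explores.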
CO2 Capture from the Air: Technology Assessment and Implications for Climate Policy
NASA Astrophysics Data System (ADS)
Keith, D. W.
2002-05-01
It is physically possible to capture CO2 directly from the air and immobilize it in geological structures. Today, there are no large-scale technologies that achieve air capture at reasonable cost. Yet, strong arguments suggest that it will be comparatively easy to develop practical air capture technologies on the timescales relevant to climate policy [1]. This paper first analyzes the cost of air capture and then assesses the implications for climate policy. We first analyze the lower bound on the cost needed for air capture, describing the thermodynamic and physical limits to the use of energy and land. We then compare the costs of air capture to the cost of capture from combustion exhaust streams. While the intrinsic minimum energy requirement is larger for air capture, we argue that air capture has important structural advantages, such as the reduction of transport costs and the larger potential for economies of scale. These advantages suggest that, in the long run, air capture may be competitive with other methods of achieving deep emissions reductions. We provide a preliminary engineering-economic analysis of an air capture system based on CaO to CaCO3 chemical looping [1]. We analyze the possibility of doing the calcination in a modified pressurized fluidized bed combustor (PFBC) burning coal in a CO2-rich atmosphere with oxygen supplied by an air separation unit. The CaCO3-to-coal ratio would be ~2:1 and the system would be nearly thermally neutral. PFBC systems have been demonstrated at capacities of over 100 MW. Such systems already include CaCO3 injection for sulfur control, and operate at suitable temperatures and pressures for calcination. We assess the potential to recover heat from the dissolution of CaO in order to reduce the overall energy requirements. We analyze the possibility of adapting existing large water/air heat exchangers for use as contacting systems to capture CO2 from the air using the calcium hydroxide solution. The implications of air capture for global climate policy are examined using DIAM [2], a stylized integrated assessment model. We find that air capture can fundamentally alter the temporal dynamics of global warming mitigation. The reason for this is that air capture differs from conventional mitigation in three key aspects. First, it removes emissions from any part of the economy with equal ease or difficulty, so its cost provides an absolute cap on the cost of mitigation. Second, it permits reduction in concentrations faster than the natural carbon cycle: the effects of irreversibility are thus partly alleviated. Third, because it is less coupled with the energy system, air capture may offer stronger economies of scale and smaller adjustment costs than the more conventional mitigation technologies. Air capture limits the total cost of a worst-case climate scenario. In an optimal sequential decision framework with uncertainty, the existence of air capture decreases the need for near-term precautionary abatement. Like geoengineering, air capture thus poses a moral hazard. 1. S. Elliott, et al. Compensation of atmospheric CO2 buildup through engineered chemical sinkage. Geophys. Res. Let., 28:1235-1238, 2001. 2. Minh Ha-Duong, Michael J. Grubb, and Jean-Charles Hourcade. Influence of socioeconomic inertia and uncertainty on optimal CO2-emission abatement. Nature, 390: 270-274, 1997.
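The thermodynamic lower bound mentioned above follows from ideal-mixing arguments. A minimal sketch of that estimate (a simplified bound for illustration, not the paper's engineering-economic analysis):

```python
# Sketch: ideal minimum work to extract CO2 from air at mole fraction x,
# W_min ~ -R*T*ln(x) per mole captured (ideal-gas mixing estimate).
import math

R = 8.314       # J/(mol K)
T = 298.15      # K
x_co2 = 400e-6  # ~400 ppm CO2 in ambient air

w_min = -R * T * math.log(x_co2)          # J per mol CO2, ~19.4 kJ/mol
per_tonne = w_min / 0.044 * 1000 / 1e9    # GJ per tonne CO2, ~0.44 GJ/t
print(f"{w_min/1000:.1f} kJ/mol ({per_tonne:.2f} GJ per tonne CO2)")
```

This bound is several times smaller than that for a typical flue gas stream's higher CO2 fraction only in relative terms; in absolute terms it remains modest, which is why the structural advantages discussed above can dominate the comparison.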
Special Interest: Teachers Unions and America's Public Schools
ERIC Educational Resources Information Center
Moe, Terry
2011-01-01
Why are America's public schools falling so short of the mark in educating the nation's children? Why are they organized in ineffective ways that fly in the face of common sense, to the point that it is virtually impossible to get even the worst teachers out of the classroom? And why, after more than a quarter century of costly education reform,…
The High Cost of South Carolina's Low Graduation Rate. School Choice Issues in the State
ERIC Educational Resources Information Center
Gottlob, Brian J.
2007-01-01
Research has documented a crisis in South Carolina's high school graduation rate. While state officials report a graduation rate above 70 percent, researchers from South Carolina and elsewhere place the rate just above 50 percent, with rates among minority students lower than 50 percent. South Carolina's graduation rate is the worst of all 50…
Francisco J. Escobedo; John E. Wagner; David J. Nowak; Carmen Luz De la Maza; Manuel Rodriguez; Daniel E. Crane
2008-01-01
Santiago, Chile has the distinction of having among the worst urban air pollution problems in Latin America. As part of an atmospheric pollution reduction plan, the Santiago Regional Metropolitan government defined an environmental policy goal of using urban forests to remove particulate matter less than 10 µm (PM10) in the Gran...
RMP Guidance for Offsite Consequence Analysis
Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.
DEGANELLO, A.; GITTI, G.; PARRINELLO, G.; MURATORI, E.; LAROTONDA, G.; GALLO, O.
2013-01-01
Reconstructive surgery of the head and neck region has undergone tremendous advancement over the past three decades, and the success rate of free tissue transfers has risen to greater than 95%. It must always be considered that not all patients are ideal candidates for free flap reconstruction, and also that not every defect strictly requires a free flap transfer to achieve good functional results. At our institution, free flap reconstruction is first choice, although we use pedicled alternative flaps for most weak patients suffering from severe comorbidities, and for pretreated patients presenting a second primary or a recurrent cancer. From July 2006 to May 2010, 54 consecutive patients underwent soft tissue reconstruction of oral cavity and oropharyngeal defects. We divided the cohort into three groups: Group 1 (G1): 16 patients in good general condition who received free radial forearm flap reconstruction; Group 2 (G2): 18 high-risk patients who received a reconstruction with an infrahyoid flap; Group 3 (G3): 20 patients who received temporal flap (10 cases) or pectoral flap (10 cases) reconstruction. We must highlight that pedicled alternative flaps were used in elderly, unfavourable, and weak patients, where medical costs usually tend to rise rather than decrease. We compared the healthcare costs of the three groups, calculating real costs in each group from review of medical records and operating room registers, and calculating the corresponding DRG system reimbursement. For real costs, we found a statistically significant difference among groups: in G1 the average total cost per patient was € 22,924, in G2 it was € 18,037, and in G3 it was € 19,872 (p = 0.043). The amount of the refund, based on the DRG system, was € 7,650 per patient, independently of the type of surgery. Our analysis shows that the use of alternative non-microvascular techniques in high-risk patients is functionally and oncologically sound, and can even produce cost savings. In particular, the infrahyoid flap (G2) ensures excellent functional results, accompanied by the best economic savings in the worst group of patients. Our data reflect a large disconnection between the DRG system and actual treatment costs. PMID:24376293
J. N. Kochenderfer; G. W. Wendel; H. Clay Smith
1984-01-01
A "minimum-standard" forest truck road that provides efficient and environmentally acceptable access for several forest activities is described. Cost data are presented for eight of these roads constructed in the central Appalachians. The average cost per mile excluding gravel was $8,119. The range was $5,048 to $14,424. Soil loss was measured from several...
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
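A minimal value-iteration sketch for a toy MDP of the kind described above (states and QoS-like rewards are random placeholders, not a real service registry):

```python
# Sketch: value iteration on a toy MDP. States could represent composition
# progress, actions candidate services, rewards QoS scores; here all
# transitions and rewards are random placeholders.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s,a,s']
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # R[s,a]

V = np.zeros(n_states)
while True:
    Q = R + gamma * P @ V          # Q[s,a] = R[s,a] + gamma * sum_s' P * V
    V_new = Q.max(axis=1)          # greedy backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print("optimal values:", V.round(3), "policy:", Q.argmax(axis=1))
```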
Guaranteed Discrete Energy Optimization on Large Protein Design Problems.
Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas
2015-12-08
In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
Sunlight reflections from a solar power satellite on solar mirrors should not harm the eyes
NASA Technical Reports Server (NTRS)
Hyson, M. T.
1980-01-01
The potential hazard posed by the reflection of the Sun's image by the solar power satellite (SPS) was examined. In the worst case, where the transmitter is assumed to be a perfect mirror reflecting the Sun's image normal to the atmosphere, the total energy received by the eye would be 3.36 × 10⁻⁷ W. The eye's optics would blur the 5.6-arcsecond image of the transmitter over a disk approximately 6 minutes of arc in diameter, reducing the maximum intensity at the retina by 99%. A given cone in the retina would receive even less energy due to the constant random microtremors and microsaccadic movements of the eye muscles, which move the retina over an area some 8 minutes of arc in radius, even during steady fixation. Therefore, very conservative estimates show that the reflections from the transmitter could be viewed safely for at least 3.2 hours and that the entire SPS structure could be viewed for a minimum of 1 hour. The Solares mirror is briefly considered and is shown to be safe to view for at least 2.4 minutes.
Sol-gel derived Al-Ga co-doped transparent conducting oxide ZnO thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serrao, Felcy Jyothi, E-mail: jyothiserrao@gmail.com; Department of Physics, Karnataka Government Research centre SCEM, Mangalore, 575007; Sandeep, K. M.
2016-05-23
Transparent conducting ZnO thin films doped with Al, doped with Ga, and co-doped with Al and Ga (1:1) (AGZO) were grown on glass substrates by the cost-effective sol-gel spin-coating method. The XRD results showed that all the films are polycrystalline in nature and highly textured along the (002) plane. Enhanced grain size was observed in the case of the AGZO thin films. The transmittance of all the films was more than 83% in the visible region of light. The electrical properties such as carrier concentration and mobility are increased in the case of AGZO compared to those of the Al- and Ga-doped ZnO thin films. A minimum resistivity of 2.54 × 10⁻³ Ω cm was observed in the AGZO thin film. The co-doped AGZO thin films exhibited minimum resistivity and high optical transmittance, indicating that co-doped ZnO thin films could be used in transparent electronics, mainly in display applications.
For wind turbines in complex terrain, the devil is in the detail
NASA Astrophysics Data System (ADS)
Lange, Julia; Mann, Jakob; Berg, Jacob; Parvu, Dan; Kilpatrick, Ryan; Costache, Adrian; Chowdhury, Jubayer; Siddiqui, Kamran; Hangan, Horia
2017-09-01
The cost of energy produced by onshore wind turbines is among the lowest available; however, onshore wind turbines are often positioned in complex terrain, where the wind resources and wind conditions are quite uncertain due to the surrounding topography and/or vegetation. In this study, we use a scale model in a three-dimensional wind-testing chamber to show how minor changes in the terrain can result in significant differences in the flow at turbine height. These differences affect not only the power performance but also the lifetime and maintenance costs of wind turbines, and hence the economy and feasibility of wind turbine projects. We find that the mean wind, wind shear, and turbulence level are extremely sensitive to the exact details of the terrain: a small modification of the edge of our scale model results in a reduction of the estimated annual energy production by at least 50% and an increase in the turbulence level by a factor of five in the worst-case scenario with the most unfavorable wind direction. Wind farm developers should be aware that destructive flows can occur near escarpments and that their extent is uncertain, thus warranting on-site field measurements.
Menu Plans: Maximum Nutrition for Minimum Cost.
ERIC Educational Resources Information Center
Texas Child Care, 1995
1995-01-01
Suggests that menu planning is the key to getting maximum nutrition in day care meals and snacks for minimum cost. Explores United States Department of Agriculture food pyramid guidelines for children and tips for planning menus and grocery shopping. Includes suggested meal patterns and portion sizes. (HTH)
Key node selection in minimum-cost control of complex networks
NASA Astrophysics Data System (ADS)
Ding, Jie; Wen, Changyun; Li, Guoqi
2017-11-01
Finding the key node set that is connected with a given number of external control sources for driving complex networks from initial state to any predefined state with minimum cost, known as minimum-cost control problem, is critically important but remains largely open. By defining an importance index for each node, we propose revisited projected gradient method extension (R-PGME) in Monte-Carlo scenario to determine key node set. It is found that the importance index of a node is strongly correlated to occurrence rate of that node to be selected as a key node in Monte-Carlo realizations for three elementary topologies, Erdős-Rényi and scale-free networks. We also discover the distribution patterns of key nodes when the control cost reaches its minimum. Specifically, the importance indices of all nodes in an elementary stem show a quasi-periodic distribution with high peak values in the beginning and end of a quasi-period while they approach to a uniform distribution in an elementary cycle. We further point out that an elementary dilation can be regarded as two elementary stems whose lengths are the closest, and the importance indices in each stem present similar distribution as in an elementary stem. Our results provide a better understanding and deep insight of locating the key nodes in different topologies with minimum control cost.
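The control cost minimized in this line of work is typically the quadratic input energy. For linear dynamics, the textbook minimum-energy solution is quoted here for reference, since the abstract does not spell out its cost model (the continuous-time form below is the standard one and may differ from the paper's exact formulation):

```latex
% Textbook minimum-energy control of \dot{x} = Ax + Bu from x_0 to x_f
% in time t_f, with W the controllability Gramian.
\begin{align*}
  J &= \int_{0}^{t_f} u(t)^{\mathsf T} u(t)\, dt, \qquad
  W(t_f) = \int_{0}^{t_f} e^{At} B B^{\mathsf T} e^{A^{\mathsf T} t}\, dt, \\
  u^{*}(t) &= B^{\mathsf T} e^{A^{\mathsf T}(t_f - t)}\, W(t_f)^{-1}
              \bigl(x_f - e^{A t_f} x_0\bigr), \qquad
  J^{*} = \bigl(x_f - e^{A t_f} x_0\bigr)^{\mathsf T} W(t_f)^{-1}
          \bigl(x_f - e^{A t_f} x_0\bigr).
\end{align*}
```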
NASA Astrophysics Data System (ADS)
Febriana Aqidawati, Era; Sutopo, Wahyudi; Hisjam, Muh.
2018-03-01
Newspapers are products with special characteristics: they are perishable, have a short window between production and distribution, carry zero inventory, and lose sales value as time passes. Generally, the problem of production and distribution in the newspaper supply chain is the integration of production planning and distribution to minimize the total cost. The approach used in this article to solve the problem is an analytical model. In this article, several parameters and constraints have been considered in calculating the total cost of the integrated production and distribution of newspapers over a given time horizon. This model can be used by production and marketing managers as decision support in determining the optimal quantities of production and distribution in order to obtain minimum cost, so that the company's competitiveness can be increased.
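A minimal sketch of such an integrated production-distribution model as a linear program, using PuLP with illustrative placeholder data (the article's actual parameters and constraints are not reproduced here):

```python
# Sketch: integrated production-distribution LP for a newspaper supply
# chain. Choose shipment quantities per zone to minimize production plus
# distribution cost while meeting demand. All data are placeholders.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

zones = ["A", "B"]
demand = {"A": 6000, "B": 4000}      # copies demanded per zone
prod_cost = 0.12                     # production cost per copy
ship_cost = {"A": 0.02, "B": 0.05}   # distribution cost per copy, by zone
capacity = 12000                     # press capacity for the horizon

prob = LpProblem("newspaper_plan", LpMinimize)
x = {z: LpVariable(f"ship_{z}", lowBound=0) for z in zones}
prob += lpSum((prod_cost + ship_cost[z]) * x[z] for z in zones)
for z in zones:
    prob += x[z] >= demand[z]        # meet demand; no inventory carryover
prob += lpSum(x[z] for z in zones) <= capacity
prob.solve()
print({z: value(x[z]) for z in zones}, "total cost:", value(prob.objective))
```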
Economics show CO2 EOR potential in central Kansas
Dubois, M.K.; Byrnes, A.P.; Pancake, R.E.; Willhite, G.P.; Schoeling, L.G.
2000-01-01
Carbon dioxide (CO2) enhanced oil recovery (EOR) may be the key to recovering hundreds of millions of bbl of trapped oil from the mature fields in central Kansas. Preliminary economic analysis indicates that CO2 EOR should provide an internal rate of return (IRR) greater than 20%, before income tax, assuming oil sells for $20/bbl, CO2 costs $1/Mcf, and gross utilization is 10 Mcf of CO2/bbl of oil recovered. If the CO2 cost is reduced to $0.75/Mcf, an oil price of $17/bbl yields an IRR of 20%. Reservoir and economic modeling indicates that IRR is most sensitive to oil price and CO2 cost. A project requires a minimum recovery of 1,500 net bbl/acre (about 1 million net bbl/1-mile section) under a best-case scenario. Less important variables to the economics are capital costs and non-CO2-related lease operating expenses.
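A minimal sketch of the IRR screen described above, using the numpy-financial package with placeholder capital and recovery figures (only the price, CO2 cost, and utilization assumptions are taken from the abstract):

```python
# Sketch: IRR of a CO2 EOR project. Net margin per barrel follows the
# abstract's assumptions; capex and the flat 10-year recovery profile
# are hypothetical placeholders.
import numpy_financial as npf

oil_price = 20.0        # $/bbl
co2_cost = 1.0          # $/Mcf
utilization = 10.0      # Mcf CO2 per bbl recovered
capex = 4.0e6           # $ (placeholder)
bbl_per_year = 100_000  # placeholder recovery, 10 years flat

net_per_bbl = oil_price - co2_cost * utilization   # $10/bbl margin
cashflows = [-capex] + [net_per_bbl * bbl_per_year] * 10
print(f"IRR = {npf.irr(cashflows):.1%}")           # ~21% for these inputs
```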
Planning Education for Regional Economic Integration: The Case of Paraguay and MERCOSUR.
ERIC Educational Resources Information Center
McGinn, Noel
This paper examines the possible impact of MERCOSUR on Paraguay's economic and educational systems. MERCOSUR is a trade agreement among Argentina, Brazil, Paraguay, and Uruguay, under which terms all import tariffs among the countries will be eliminated by 1994. The countries will enter into a common economic market. The worst-case scenario…
Diameter-Constrained Steiner Tree
NASA Astrophysics Data System (ADS)
Ding, Wei; Lin, Guohui; Xue, Guoliang
Given an edge-weighted undirected graph G = (V, E, c, w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals, and a positive constant D_0, we seek a minimum-cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree represents the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum-cost diameter-constrained Steiner tree problem. It is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum-cost diameter-constrained Steiner tree under a fixed topology.
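A minimal sketch of the diameter computation the constraint refers to, using the standard double-sweep on a weighted tree (in a tree with positive weights, the endpoints of a maximum-weight path are leaves, matching the definition above):

```python
# Sketch: weighted diameter of a tree by double sweep. Sweep 1 finds the
# node farthest from an arbitrary start; sweep 2 from that node reaches
# the true diameter. Toy adjacency data are illustrative.
from collections import defaultdict

def farthest(adj, src):
    dist, stack = {src: 0.0}, [src]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                stack.append(v)
    return max(dist, key=dist.get), max(dist.values())

adj = defaultdict(list)
for u, v, w in [(0, 1, 2.0), (1, 2, 1.5), (1, 3, 4.0)]:  # toy tree
    adj[u].append((v, w)); adj[v].append((u, w))

a, _ = farthest(adj, 0)
_, diameter = farthest(adj, a)
print("weighted diameter:", diameter)  # 6.0 on this toy tree (path 0-1-3)
```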
Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses
NASA Technical Reports Server (NTRS)
Choi, Michael K.; Emery, Josh; Delbo, Marco
2014-01-01
A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.
Availability Simulation of AGT Systems
DOT National Transportation Integrated Search
1975-02-01
The report discusses the analytical and simulation procedures that were used to evaluate the effects of failure in a complex dual-mode transportation system based on a worst-case steady-state condition. The computed results are an availability figure ...
Carbon monoxide screen for signalized intersections COSIM, version 3.0 : technical documentation.
DOT National Transportation Integrated Search
2008-07-01
The Illinois Department of Transportation (IDOT) currently uses the computer screening model Illinois : CO Screen for Intersection Modeling (COSIM) to estimate worst-case CO concentrations for proposed roadway : projects affecting signalized intersec...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis
Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.
RMP Guidance for Chemical Distributors - Chapter 4: Offsite Consequence Analysis
How to perform the OCA for regulated substances, informing the government and the public about potential consequences of an accidental chemical release at your facility. Includes calculations for worst-case scenario, alternative scenarios, and endpoints.
... damage to the tissue and bone supporting the teeth. In the worst cases, you can lose teeth. In gingivitis, the gums become red and swollen. ... flossing and regular cleanings by a dentist or dental hygienist. Untreated gingivitis can lead to periodontitis. If ...
NASA Technical Reports Server (NTRS)
Bury, Kristen M.; Kerslake, Thomas W.
2008-01-01
NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° towards the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume impingement induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.
Response of the North American corn belt to climate warming, CO2
NASA Astrophysics Data System (ADS)
1983-08-01
The climate of the North American corn belt was characterized to estimate the effects of climatic change on that agricultural region. Heat and moisture characteristics of the current corn belt were identified and mapped based on a simulated climate for a doubling of atmospheric CO2 concentrations. The result was a map of the projected corn belt corresponding to the simulated climatic change. Such projections were made with and without an allowance for earlier planting dates that could occur under a CO2-induced climatic warming. Because the direct effects of CO2 increases on plants, improvements in farm technology, and plant breeding are not considered, the resulting projections represent an extreme or worst case. The results indicate that even for such a worst case, climatic conditions favoring corn production would not extend very far into Canada. Climatic buffering effects of the Great Lakes would apparently retard northeastward shifts in corn-belt location.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1985-01-01
For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.
Centaur Propellant Thermal Conditioning Study
NASA Technical Reports Server (NTRS)
Blatt, M. H.; Pleasant, R. L.; Erickson, R. C.
1976-01-01
A wicking investigation revealed that passive thermal conditioning was feasible and provided considerable weight advantage over active systems using throttled vent fluid in a Centaur D-1s launch vehicle. Experimental wicking correlations were obtained using empirical revisions to the analytical flow model. Thermal subcoolers were evaluated parametrically as a function of tank pressure and NPSP. Results showed that the RL10 category I engine was the best candidate for boost pump replacement and the option showing the lowest weight penalty employed passively cooled acquisition devices, thermal subcoolers, dry ducts between burns and pumping of subcooler coolant back into the tank. A mixing correlation was identified for sizing the thermodynamic vent system mixer. Worst case mixing requirements were determined by surveying Centaur D-1T, D-1S, IUS, and space tug vehicles. Vent system sizing was based upon worst case requirements. Thermodynamic vent system/mixer weights were determined for each vehicle.
Updated model assessment of pollution at major U. S. airports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamartino, R.J.; Rote, D.M.
1979-02-01
The air quality impact of aircraft at and around Los Angeles International Airport (LAX) was simulated for hours of peak aircraft operation and 'worst case' pollutant dispersion conditions by using an updated version of the Argonne Airport Vicinity Air Pollution model; field programs at LAX, O'Hare, and John F. Kennedy International Airports determined the 'worst case' conditions. Maximum carbon monoxide concentrations at LAX were low relative to National Ambient Air Quality Standards; relatively high and widespread hydrocarbon concentrations indicated that aircraft emissions may aggravate oxidant problems near the airport; nitrogen oxide concentrations were close to the levels set in proposed standards. Data on typical time-in-mode for departing and arriving aircraft, the 8/4/77 diurnal variation in airport activity, and carbon monoxide concentration isopleths are given, and the update factors in the model are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel
Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst-case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. Performance per watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst-case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low-thread-count workloads. In this paper, we describe five power management techniques that maximize per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15% depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.
VEGA Launch Vehicle Dynamic Environment: Flight Experience and Qualification Status
NASA Astrophysics Data System (ADS)
Di Trapani, C.; Fotino, D.; Mastrella, E.; Bartoccini, D.; Bonnet, M.
2014-06-01
The VEGA Launch Vehicle (LV) is equipped during flight with more than 400 sensors (pressure transducers, accelerometers, microphones, strain gauges, ...) aimed at capturing the physical phenomena occurring during the mission. The main objective of these sensors is to verify that the flight conditions are compliant with the launch vehicle and satellite qualification status and to characterize the phenomena that occur during flight. During VEGA development, several test campaigns were performed to characterize its dynamic environment and identify the worst-case conditions, but only through flight data analysis is it possible to confirm the worst cases identified and check the compliance of the operative life conditions with the components' qualification status. The scope of the present paper is to compare the sinusoidal dynamic phenomena that occurred during VEGA's first and second flights and to give a summary of the launch vehicle qualification status.
NASA Astrophysics Data System (ADS)
Bury, Kristen M.; Kerslake, Thomas W.
2008-06-01
NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° toward the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
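To make the interior-point idea concrete, the sketch below solves a nonnegativity-constrained quadratic program (the simplest contact-style problem) with a log-barrier Newton iteration in Python. It is an illustration of the method family only, not the authors' primal-dual solver or preconditioner, and the test problem is invented.

```python
import numpy as np

def barrier_qp(Q, c, x0, mu=1.0, shrink=0.1, tol=1e-8, newton_iters=50):
    """Minimize 0.5*x'Qx + c'x subject to x > 0 with a log-barrier
    interior-point iteration (simplified; the paper uses a primal-dual
    variant with problem-specific preconditioning)."""
    x = x0.astype(float).copy()
    while mu > tol:
        for _ in range(newton_iters):
            grad = Q @ x + c - mu / x          # gradient of barrier objective
            hess = Q + np.diag(mu / x**2)      # Hessian: Q plus diagonal barrier term
            dx = np.linalg.solve(hess, -grad)
            t = 1.0                            # backtrack to stay strictly feasible
            while np.any(x + t * dx <= 0):
                t *= 0.5
            x = x + t * dx
            if np.linalg.norm(grad) < 1e-10:
                break
        mu *= shrink                           # tighten the barrier
    return x

# Tiny contact-style test: minimum of x1^2 - 2*x1 + x2^2 + x2 over x >= 0
Q = np.diag([2.0, 2.0])
c = np.array([-2.0, 1.0])
print(barrier_qp(Q, c, np.ones(2)))            # approx [1, 0]
```

A full primal-dual variant would update dual multipliers alongside x; that is the form that carries the O(√n log(1/ε)) iteration bound, and the barrier version is shown here only for brevity.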
Statistical analysis of QC data and estimation of fuel rod behaviour
NASA Astrophysics Data System (ADS)
Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.
1991-02-01
The in-reactor behaviour of fuel rods is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (worst-case dataset) in fuel rod design calculations. Distributions are not considered. The results obtained in this way are very conservative, but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst-case dataset by a dataset leading to results with a known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
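A minimal sketch of the probabilistic alternative to worst-case tolerance stacking follows; the distributions and the fitted response surface are invented placeholders, not the paper's data or models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented QC distributions (means/SDs are placeholders)
n = 100_000
pellet_diam = rng.normal(8.19, 0.01, n)       # mm
clad_inner_diam = rng.normal(8.36, 0.01, n)   # mm
density = rng.normal(10.40, 0.05, n)          # g/cm^3

def response_surface(d_pellet, d_clad, rho):
    """Hypothetical fitted polynomial for some fuel-rod response quantity."""
    gap = d_clad - d_pellet
    return 100.0 + 800.0 * (0.17 - gap) + 5.0 * (rho - 10.40)

y = response_surface(pellet_diam, clad_inner_diam, density)
print(np.percentile(y, 95))   # result with known, defined conservatism
print(response_surface(8.19 + 0.02, 8.36 - 0.02, 10.40 + 0.10))  # stacked worst case
```

The 95th-percentile output plays the role of a result with defined conservatism and is noticeably less extreme than the superimposed-tolerance worst case evaluated on the last line.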
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both decomposition and search approaches.
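omega-CDBT itself is not reproduced here; as a baseline for the search side of the comparison, the following is a plain chronological backtracking solver for binary CSPs, with variable ordering and graph decomposition omitted.

```python
def consistent(var, value, assignment, constraints):
    """Check value against all binary constraints touching assigned vars."""
    for (u, v), pred in constraints.items():
        if u == var and v in assignment and not pred(value, assignment[v]):
            return False
        if v == var and u in assignment and not pred(assignment[u], value):
            return False
    return True

def backtrack(assignment, variables, domains, constraints):
    """Chronological backtracking: assign variables in order, undo on failure."""
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
            del assignment[var]
    return None

# 3-coloring of a triangle graph as a toy CSP
ne = lambda a, b: a != b
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = {("A", "B"): ne, ("B", "C"): ne, ("A", "C"): ne}
print(backtrack({}, variables, domains, constraints))
```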
Development of GIS Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm
NASA Astrophysics Data System (ADS)
Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.
2014-11-01
A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network that contains all of its nodes and whose total edge weight is minimum among all possible spanning trees of the network. In this study, we have developed a new GIS tool that uses the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. The algorithm operates on the weight (adjacency) matrix of the network and helps to solve complex network MST problems easily, efficiently and effectively. Selecting an appropriate algorithm is essential, since otherwise it is hard to obtain an optimal result. For a road transportation network, it is essential to find optimal results based on a cost factor (time or distance). This paper solves the MST problem for a road network by finding its minimum span while considering all important network junction points. GIS technology is commonly used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems and location-allocation problems. In this study we therefore developed a customized GIS tool, written as a Python script in ArcGIS software, to solve the MST problem for the road transportation network of Dehradun city, with distance and time as the impedance (cost) factors. The tool has several advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and gives access to varied information adapted to their needs. This GIS tool for MST can be applied to the nationwide Prime Minister Gram Sadak Yojana plan in India to provide optimal all-weather road connectivity to unconnected villages (points). It is also useful for laying out highways or railways spanning several cities optimally, or for connecting all cities with minimum total road length.
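A minimal sketch of the adjacency-matrix form of Prim's algorithm described above, written as standalone Python rather than an ArcGIS script tool; the toy road-network weights are invented.

```python
import numpy as np

def prim_mst(W):
    """Prim's algorithm on a symmetric weight matrix W (np.inf = no edge).
    Returns the MST as a list of (u, v, weight) edges."""
    n = W.shape[0]
    in_tree = {0}                      # grow the tree from node 0
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the tree
        w, u, v = min((W[i, j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((u, v, w))
        in_tree.add(v)
    return edges

# Toy road network: 4 junctions with distances as weights
INF = np.inf
W = np.array([[INF, 2.0, 6.0, INF],
              [2.0, INF, 3.0, 5.0],
              [6.0, 3.0, INF, 1.0],
              [INF, 5.0, 1.0, INF]])
print(prim_mst(W))   # [(0, 1, 2.0), (1, 2, 3.0), (2, 3, 1.0)], total length 6.0
```

The O(n^3) matrix scan mirrors the weight-matrix formulation in the paper; a heap-based variant would be preferred for large networks.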
A network flow model for load balancing in circuit-switched multicomputers
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1990-01-01
In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention - the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks) and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention free matching of sources to sinks which, in turn, tells one how much of the given imbalance can be eliminated without contention.
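A minimal sketch of the min-cost flow formulation, using the networkx library (an assumption of this sketch, not the paper's implementation) on a toy 2x2 mesh with one source and one sink; unit link capacities encode the contention-free requirement.

```python
import networkx as nx

# Hypothetical 2x2 mesh (the paper's meshes are larger): (0, 0) holds one
# unit of excess load and (1, 1) a one-unit deficit. Unit capacities mean
# any feasible min-cost flow corresponds to contention-free routing.
G = nx.DiGraph()
nodes = [(0, 0), (0, 1), (1, 0), (1, 1)]
for n in nodes:
    G.add_node(n, demand=0)
G.nodes[(0, 0)]["demand"] = -1   # source: excess load
G.nodes[(1, 1)]["demand"] = 1    # sink: deficit load
for x, y in nodes:
    for nbr in [(x + 1, y), (x, y + 1)]:
        if nbr in G:
            G.add_edge((x, y), nbr, capacity=1, weight=1)
            G.add_edge(nbr, (x, y), capacity=1, weight=1)

flow = nx.min_cost_flow(G)       # dict of dicts: flow[u][v] = units routed
print([(u, v) for u in flow for v in flow[u] if flow[u][v] > 0])
```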
Potential Cost Savings Associated with a Reduction of Stress Fractures among US Army Basic Trainees
1984-07-01
results. The concept of sufficient rest lends support to Scully and Worthen, who suggest incorporating rest periods in the training regimen as a ... of stress reactions. In these cases, physical stature apparently had more impact than physical conditioning. The concept of developing criteria ... to meet the minimum standard, he/she would be separated. A variation of the corrective conditioning concept has been instituted at Fort Knox. Project
Wiegman, Adrian R H; Day, John W; D'Elia, Christopher F; Rutherford, Jeffrey S; Morris, James T; Roy, Eric D; Lane, Robert R; Dismukes, David E; Snyder, Brian F
2018-03-15
Over 25% of Mississippi River delta plain (MRDP) wetlands were lost over the past century. There is currently a major effort to restore the MRDP focused on a 50-year time horizon, a period during which the energy system and climate will change dramatically. We used a calibrated MRDP marsh elevation model to assess the costs of hydraulic dredging to sustain wetlands from 2016 to 2066 and 2016 to 2100 under a range of scenarios for sea level rise, energy price, and management regimes. We developed a subroutine to simulate dredging costs based on the price of crude oil and a project efficiency factor. Crude oil prices were projected using forecasts from global energy models. The cost to sustain marsh between 2016 and 2100 rose from $128,000/ha in the no-change scenario to ~$1,010,000/ha in the worst-case scenario for sea level rise and energy price, an ~8-fold increase. Increasing suspended sediment concentrations, which is possible using managed river diversions, increased created-marsh lifespan and decreased long-term dredging costs. Created-marsh lifespan changed nonlinearly with dredging fill elevation and suspended sediment level. The cost effectiveness of marsh creation and nourishment can be optimized by adjusting dredging fill elevation to the local sediment regime. Regardless of management scenario, sustaining the MRDP with hydraulic dredging suffered declining returns on investment due to the convergence of energy and climate trends. Marsh creation will likely become unaffordable in the mid to late 21st century, especially if river sediment diversions are not constructed before 2030. We recommend that environmental managers take coupled energy and climate scenarios into consideration for long-term risk assessments and adjust restoration goals accordingly. Copyright © 2017 Elsevier B.V. All rights reserved.
A multi-period optimization model for energy planning with CO(2) emission consideration.
Mirzaesmaeeli, H; Elkamel, A; Douglas, P L; Croiset, E; Gupta, M
2010-05-01
A novel deterministic multi-period mixed-integer linear programming (MILP) model for the power generation planning of electric systems is described and evaluated in this paper. The model is developed with the objective of determining the optimal mix of energy supply sources and pollutant mitigation options that meet a specified electricity demand and CO(2) emission targets at minimum cost. Several time-dependent parameters are included in the model formulation; they include forecasted energy demand, fuel price variability, construction lead time, conservation initiatives, and increase in fixed operational and maintenance costs over time. The developed model is applied to two case studies. The objective of the case studies is to examine the economical, structural, and environmental effects that would result if the electricity sector was required to reduce its CO(2) emissions to a specified limit. Copyright 2009 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Priest, G. R.; Goldfinger, C.; Wang, K.; Witter, R. C.; Zhang, Y.; Baptista, A.
2008-12-01
To update the tsunami hazard assessment method for Oregon, we (1) evaluate geologically reasonable variability of the earthquake rupture process on the Cascadia megathrust, (2) compare those scenarios to geological and geophysical evidence for plate locking, (3) specify 25 deterministic earthquake sources, and (4) use the resulting vertical coseismic deformations as initial conditions for simulation of Cascadia tsunami inundation at Cannon Beach, Oregon. Because of the Cannon Beach focus, the north-south extent of source scenarios is limited to Neah Bay, Washington to Florence, Oregon. We use the marine paleoseismic record to establish recurrence bins from the 10,000 year event record and select representative coseismic slips from these data. Assumed slips on the megathrust are 8.4 m (290 yrs of convergence), 15.2 m (525 years of convergence), 21.6 m (748 years of convergence), and 37.5 m (1298 years of convergence) which, if the sources were extended to the entire Cascadia margin, give Mw varying from approximately 8.3 to 9.3. Additional parameters explored by these scenarios characterize ruptures with a buried megathrust versus splay faulting, local versus regional slip patches, and seaward skewed versus symmetrical slip distribution. By assigning variable weights to the 25 source scenarios using a logic tree approach, we derived percentile inundation lines that express the confidence level (percentage) that a Cascadia tsunami will NOT exceed the line. Lines of 50, 70, 90, and 99 percent confidence correspond to maximum runup of 8.9, 10.5, 13.2, and 28.4 m (NAVD88). The tsunami source with highest logic tree weight (preferred scenario) involved rupture of a splay fault with 15.2 m slip that produced tsunami inundation near the 70 percent confidence line. Minimum inundation consistent with the inland extent of three Cascadia tsunami sand layers deposited east of Cannon Beach within the last 1000 years suggests a minimum of 15.2 m slip on buried megathrust ruptures. The largest tsunami run-up at the 99 percent isoline was from 37.5 m slip partitioned to a splay fault. This type of extreme event is considered to be very rare, perhaps once in 10,000 years based on offshore paleoseismic evidence, but it can produce waves rivaling the 2004 Indian Ocean tsunami. Cascadia coseismic deformation most similar to the Indian Ocean earthquake produced generally smaller tsunamis than at the Indian Ocean due mostly to the 1 km shallower water depth on the Cascadia margin. Inundation from distant tsunami sources was assessed by simulation of only two Mw 9.2 earthquakes in the Gulf of Alaska, a hypothetical worst-case developed by the Tsunami Pilot Study Working Group (2006) and a historical worst case, the 1964 Prince William Sound Earthquake; maximum runups were, respectively, 12.4 m and 7.5 m.
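A minimal sketch of how logic-tree weights over source scenarios can translate into confidence isolines: sort scenarios by runup, accumulate weights, and read off the runup not exceeded at the desired confidence. The weights and most runup values below are invented for illustration, not the study's assignments.

```python
import numpy as np

# Invented logic-tree weights and scenario maximum runups (metres)
runups = np.array([6.0, 8.9, 10.5, 13.2, 28.4])
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # must sum to 1

def runup_at_confidence(runups, weights, conf):
    """Smallest runup whose cumulative scenario weight reaches conf."""
    order = np.argsort(runups)
    cum = np.cumsum(weights[order])
    return runups[order][np.searchsorted(cum, conf)]

print(runup_at_confidence(runups, weights, 0.85))   # 13.2 m not exceeded
```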
Russell, Kathy A; Milne, Andrew D; Varma, Devesh; Josephson, Keith; Lee, J Michael
2011-01-01
The purposes of this study were (1) to develop imaging methods and objective numeric parameters to describe nose morphology, and (2) to correlate those parameters with nasal esthetics for patients with clefts. A total of 28 patients with repaired complete unilateral cleft lip and palate (CUCLP) and 20 age- and gender-matched individuals without clefts were identified. A panel of orthodontists rated and ranked nasal esthetics from nose casts for the cleft group. Best and worst esthetic cleft groups were established from the cast assessments. Three-dimensional surface coordinates of the casts were digitally mapped with an electromagnetic tracking device. Digitized nasal images were oriented, voxelated, sliced, and mathematically curve-fitted. Maximum difference, percent area difference, and maximum and minimum derivative differences between cleft and noncleft and between right and left nose sides were calculated. Differences in parameters between groups were assessed with the use of analysis of variance (ANOVA) and t tests, and correlations with esthetics were assessed with the Spearman rank correlation test. Differences were seen between cleft and noncleft and best and worst esthetic groups for all four parameters (p < .05). The best esthetic cleft group had (1) lower percent area difference (p < .0001), (2) lower maximum difference (p < .001), and (3) smaller differences in slope of the nose in the coronal plane (p < .0001) than the worst esthetic cleft group. Maximum difference and maximum derivative difference and, to a lesser degree, percent area difference can be used to identify differences between cleft and noncleft nasal morphology and to assess levels of nasal esthetics for patients with CUCLP.
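Illustrative forms of the four asymmetry parameters follow, assuming the two nose-profile curves have been sampled on a common grid; the study's definitions on voxelated, sliced, curve-fitted casts are richer than this sketch, and the example profiles are invented.

```python
import numpy as np

def asymmetry_parameters(y_left, y_right, x):
    """Compare two profile curves sampled on grid x: maximum difference,
    percent area difference, and max/min difference in slopes
    (illustrative definitions, see text)."""
    diff = np.abs(y_left - y_right)
    max_diff = diff.max()
    pct_area = 100.0 * np.trapz(diff, x) / np.trapz(np.abs(y_right), x)
    slope_diff = np.gradient(y_left, x) - np.gradient(y_right, x)
    return max_diff, pct_area, slope_diff.max(), slope_diff.min()

# Invented profiles standing in for digitized cleft/noncleft nose sides
x = np.linspace(0.0, 1.0, 200)
cleft = 0.9 * np.sin(np.pi * x) ** 1.2
noncleft = np.sin(np.pi * x)
print(asymmetry_parameters(cleft, noncleft, x))
```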
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
NASA Technical Reports Server (NTRS)
Rosmait, Russell L.
1996-01-01
The development of a new space transportation system in a climate of constant budget cuts and staff reductions can be, and is, a difficult task. It is no secret that NASA's current launching system consumes a very large portion of NASA funding and requires a large army of people to operate and maintain. The new Reusable Launch Vehicle (RLV) project and its programs are faced with the monumental task of making the cost of access to space dramatically lower and more efficient than NASA's current system. With pressure from congressional budget cutters as well as increased competition and loss of market share to international agencies, the RLV program's first priority is to develop 'low-cost, reliable transportation to earth orbit.' One of the program's major focuses in achieving this goal is to rely on the maturing of advanced technologies. The technologies for the RLV are numerous and varied, and assessing their current status within the RLV development program is paramount. There are several ways to assess these technologies; one is through the use of Technology Readiness Levels (TRLs). This project focused on establishing current (summer 1995) 'worst case' TRLs for six selected technologies under consideration for use within the RLV program. The six technologies evaluated were Concurrent Engineering, Embedded Sensor Technology, Rapid Prototyping, Friction Stir Welding, Thermal Spray Coatings, and VPPA Welding.
Resistance delaying strategies on UK sheep farms: A cost benefit analysis.
Learmount, Jane; Glover, Mike J; Taylor, Mike A
2018-04-30
UK guidelines for the sustainable control of parasites in sheep (SCOPS) were formulated with the primary aim of delaying development of anthelmintic resistance (AR) on UK sheep farms. Promoting their use requires the engagement and commitment of stakeholders. An important driver for behavioural change in sheep farmers is evidence of economic benefits. A recent evaluation of SCOPS guidance in practice demonstrated a significant reduction in anthelmintic use, suggesting economic benefits through a direct reduction in product and labour costs. However, in order to maintain production, a range of alternative control strategies are advised, resulting in additional costs to farmers, and so a full cost benefit analysis of best practice management was undertaken. We allocated financial values to the management recommendations described in the SCOPS technical manual. Benefits were calculated using data for production variables and anthelmintic use measured during studies to evaluate the effect of SCOPS recommendations on 16 UK sheep farms and from other published work. As SCOPS control is not prescriptive and a range of different diagnostics are available, best and worst case scenarios were presented, comparing the cheapest methods (e.g. egg counts without larval culture) and management situations (e.g. closed flocks not requiring quarantine treatments) with the most laborious and expensive. Simulations were run for farms with a small, medium or large flock (300; 1000; 1900 ewes), as well as comparing scenarios with and without potential production benefits from using effective wormers. Analysis demonstrated a moderate cost for all farms under both scenarios when production benefits were not included. A cost benefit was demonstrated for medium and large farms when production benefits were included, and the benefit could be perceived as significant in the case of the large farms for the best case scenario (>£5000 per annum). Despite a significant potential reduction in anthelmintic use by farmers employing SCOPS guidance, the very low price of the older anthelmintic classes meant that the benefit did not always outweigh the additional management/diagnostic costs unless an increase in production was also achieved. This is an important finding. Focussing research on key innovations that will improve the cost-effectiveness of diagnostic assays in a diagnostic-driven control strategy, as well as designing treatment options that can improve production outcomes, and presenting them in a clear and transparent way, must be high-priority goals. Coupling targeted research with improvements in the delivery of messages to the end user is important in the light of increasing global concerns over drug resistance. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Socolovsky, Mariano; Di Masi, Gilda; Binaghi, Daniela; Campero, Alvaro; Páez, Miguel Domínguez; Dubrovsky, Alberto
2014-01-01
Thoracic outlet syndrome is a compression of the brachial plexus that remains highly controversial. Classification into true or neurogenic outlet (TTO) and disputed or non-neurogenic outlet (DTO) is becoming very popular. The former is characterized by muscular atrophy of the intrinsic muscles of the hand, while the latter has only sensory symptoms. The purpose of this article is to analyze the results obtained in a series of 31 patients. All patients with a diagnosis of thoracic outlet syndrome operated on between January 2003 and December 2012 with a minimum follow-up of six months were included. Age, sex, symptoms, classification, preoperative study results, complications and recurrences were analyzed. 31 surgeries performed in 30 patients, 9 with TTO (8 women, mean age 24.3 years) and 21 with DTO (18 women, mean age 37.4 years, 1 recurrence), were included. Ninety percent of patients presented preoperative neurophysiological abnormalities and 66.6% presented preoperative imaging abnormalities. All TTO cases and only 36.7% of DTO cases showed clear pathological findings during surgical exploration. A high percentage (87.5% sensory and 77.7% motor) of TTO cases improved after surgical decompression. Only 45.5% of DTO cases showed permanent positive changes, 13.6% temporary changes, 36.6% no changes, and 4.5% (one case) showed deterioration after decompressive surgery. Complications after surgery were more frequent, but temporary, in TTO cases (33.3%) than in DTO cases (13.6%). TTO showed a favorable outcome after surgery. DTO showed a worse, but still positive, postoperative result if patients are selected properly. These data are in concordance with other recent reports.
ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?
La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim
2014-10-01
Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved the F1717 standard for the assessment of mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent clinical use, but available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or measurements from biplanar stereoradiography on 13 patients. Other mechanical variables describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents the anatomy of an average instrumented thoracolumbar segment quite well. The stress on the pedicular screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst-case condition we found at the L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.
McBain, Ryan K; Salhi, Carmel; Hann, Katrina; Salomon, Joshua A; Kim, Jane J; Betancourt, Theresa S
2016-01-01
Background: One billion children live in war-affected regions of the world. We conducted the first cost-effectiveness analysis of an intervention for war-affected youth in sub-Saharan Africa, as well as a broader cost analysis. Methods: The Youth Readiness Intervention (YRI) is a behavioural treatment for reducing functional impairment associated with psychological distress among war-affected young persons. A randomized controlled trial was conducted in Freetown, Sierra Leone, from July 2012 to July 2013. Participants (n = 436, aged 15–24) were randomized to YRI (n = 222) or care as usual (n = 214). Functional impairment was indexed by the World Health Organization Disability Assessment Scale; scores were converted to quality-adjusted life years (QALYs). An ‘ingredients approach’ estimated financial and economic costs, assuming a societal perspective. Incremental cost-effectiveness ratios (ICERs) were also expressed in terms of gains across dimensions of mental health and schooling. Secondary analyses explored whether intervention effects were largest among those worst-off (upper quartile) at baseline. Results: Retention at 6-month follow-up was 85% (n = 371). The estimated economic cost of the intervention was $104 per participant. Functional impairment was lower among YRI recipients, compared with controls, following the intervention but not at 6-month follow-up, and yielded an ICER of $7260 per QALY gained. At 8-month follow-up, teachers’ interviews indicated that YRI recipients had higher school enrolment [P < 0.001, odds ratio (OR) 8.9], denoting a cost of $431 per additional school year gained, as well as better school attendance (P = 0.007, OR 34.9) and performance (P = 0.03, effect size = −1.31). Secondary analyses indicated that the intervention was cost-effective among those worst-off at baseline, yielding an ICER of $3564 per QALY gained. Conclusions: The YRI is not cost-effective at a willingness-to-pay threshold of three times average gross domestic product per capita. However, results indicate that the YRI translated into a range of benefits, such as improved school enrolment, not captured by cost-effectiveness analysis. We also outline areas for modification to improve cost-effectiveness in future trials. Trial Registration: clinicaltrials.gov Identifier: RPCGA-YRI-21003 PMID:26345320
McBain, Ryan K; Salhi, Carmel; Hann, Katrina; Salomon, Joshua A; Kim, Jane J; Betancourt, Theresa S
2016-05-01
One billion children live in war-affected regions of the world. We conducted the first cost-effectiveness analysis of an intervention for war-affected youth in sub-Saharan Africa, as well as a broader cost analysis. The Youth Readiness Intervention (YRI) is a behavioural treatment for reducing functional impairment associated with psychological distress among war-affected young persons. A randomized controlled trial was conducted in Freetown, Sierra Leone, from July 2012 to July 2013. Participants (n = 436, aged 15-24) were randomized to YRI (n = 222) or care as usual (n = 214). Functional impairment was indexed by the World Health Organization Disability Assessment Scale; scores were converted to quality-adjusted life years (QALYs). An 'ingredients approach' estimated financial and economic costs, assuming a societal perspective. Incremental cost-effectiveness ratios (ICERs) were also expressed in terms of gains across dimensions of mental health and schooling. Secondary analyses explored whether intervention effects were largest among those worst-off (upper quartile) at baseline. Retention at 6-month follow-up was 85% (n = 371). The estimated economic cost of the intervention was $104 per participant. Functional impairment was lower among YRI recipients, compared with controls, following the intervention but not at 6-month follow-up, and yielded an ICER of $7260 per QALY gained. At 8-month follow-up, teachers' interviews indicated that YRI recipients had higher school enrolment [P < 0.001, odds ratio (OR) 8.9], denoting a cost of $431 per additional school year gained, as well as better school attendance (P = 0.007, OR 34.9) and performance (P = 0.03, effect size = -1.31). Secondary analyses indicated that the intervention was cost-effective among those worst-off at baseline, yielding an ICER of $3564 per QALY gained. The YRI is not cost-effective at a willingness-to-pay threshold of three times average gross domestic product per capita. However, results indicate that the YRI translated into a range of benefits, such as improved school enrolment, not captured by cost-effectiveness analysis. We also outline areas for modification to improve cost-effectiveness in future trials. clinicaltrials.gov Identifier: RPCGA-YRI-21003. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
How America Pays for College, 2011. Sallie Mae's National Study of College Students and Parents
ERIC Educational Resources Information Center
Sallie Mae, Inc., 2011
2011-01-01
Sallie Mae's national study, "How America Pays for College," now in its fourth year, shows the resilience of American families' strongly held belief in the value of a college education. Even in the face of rising tuition costs and the worst economic decline in a generation, between academic years 2007-2008 and 2009-2010 Americans paid increasingly…
42 CFR 412.348 - Exception payments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... beginning on or after October 1, 1991 and before October 1, 2001. (c) Minimum payment level by class of hospital. (1) CMS establishes a minimum payment level by class of hospital. The minimum payment level for a hospital will equal a fixed percentage of the hospital's capital-related costs. The minimum payment levels...
Quadratic RK shooting solution for an environmental parameter prediction boundary value problem
NASA Astrophysics Data System (ADS)
Famelis, Ioannis Th.; Tsitouras, Ch.
2014-10-01
Using tools of Information Geometry, the minimum distance between two elements of a statistical manifold is defined by the corresponding geodesic, i.e. the minimum-length curve that connects them. Such a curve, where the probability distribution functions in the case of our meteorological data are two-parameter Weibull distributions, satisfies a 2nd-order Boundary Value (BV) system. We study the numerical treatment of the resulting special quadratic-form system using the shooting method. We compare the solutions of the problem when we employ a classical Singly Diagonally Implicit Runge-Kutta (SDIRK) 4(3) pair of methods and a quadratic SDIRK 5(3) pair. Both pairs have the same computational cost, whereas the second attains higher order as it is specially constructed for quadratic problems.
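A minimal shooting-method sketch for a generic second-order BV problem of the same structure follows, using an off-the-shelf explicit Runge-Kutta integrator from SciPy rather than the SDIRK pairs studied in the paper; the example equation y'' = -y is a stand-in for the geodesic system.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Generic 2nd-order BVP y'' = f(t, y, y'), y(0) = a, y(1) = b, solved by
# shooting on the unknown initial slope.
def f(t, y, yp):
    return -y                      # toy ODE: y'' = -y

def shoot(slope, a=0.0):
    rhs = lambda t, z: [z[1], f(t, z[0], z[1])]
    sol = solve_ivp(rhs, (0.0, 1.0), [a, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]            # y(1) reached with this initial slope

b = 1.0
s_star = brentq(lambda s: shoot(s) - b, -10.0, 10.0)   # root: y(1; s) = b
print(s_star, 1.0 / np.sin(1.0))   # analytic slope for y'' = -y is 1/sin(1)
```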
NASA Astrophysics Data System (ADS)
Almalaq, Yasser; Matin, Mohammad A.
2014-09-01
The broadband passive optical network (BPON) has the ability to support high-speed data, voice, and video services to home and small-business customers. In this work, the performance of a bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). An important advantage of BPON is reduced cost: because BPON uses a passive splitter, the cost of maintenance between the provider and customer sides remains modest. In the proposed research, BPON has been tested with a bit error rate (BER) analyzer, which reports the maximum Q factor, minimum bit error rate, and eye height.
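The Q factor and minimum BER reported by such analyzers are linked by the standard Gaussian-noise approximation BER = (1/2) erfc(Q/√2); a one-line check (the 1e-9 threshold is a common convention, not a value from this paper):

```python
import math

def ber_from_q(q_factor):
    """Gaussian-noise approximation used by BER analyzers."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

print(ber_from_q(6.0))   # ~1e-9, a typical acceptance threshold for PON links
```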
Learning Search Control Knowledge for Deep Space Network Scheduling
NASA Technical Reports Server (NTRS)
Gratch, Jonathan; Chien, Steve; DeJong, Gerald
1993-01-01
While the general class of most scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.
Availability Analysis of Dual Mode Systems
DOT National Transportation Integrated Search
1974-04-01
The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...
Part of a May 1999 series on the Risk Management Program Rule and issues related to chemical emergency management. Explains hazard versus risk, worst-case and alternative release scenarios, flammable endpoints and toxic endpoints.
General RMP Guidance - Chapter 4: Offsite Consequence Analysis
This chapter provides basic compliance information, not modeling methodologies, for people who plan to do their own air dispersion modeling. OCA is a required part of the risk management program, and involves worst-case and alternative release scenarios.
INCORPORATING NONCHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS
The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2011 CFR
2011-07-01
... limits of current technology, for the range of environmental conditions anticipated at your facility; and... Society for Testing of Materials (ASTM) publication F625-94, Standard Practice for Describing...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., materials, support vessels, and strategies listed are suitable, within the limits of current technology, for... equipment. Examples of acceptable terms include those defined in American Society for Testing of Materials...
[Costs of health. Cost-effectiveness in the case of lifestyle changes].
Apor, Péter
2010-05-09
The economic burden of chronic cardio-vasculo-metabolic diseases on individuals and on national budgets is high and rapidly increasing. The costs of treatment and prevention differ greatly across countries of diverse culture, ethnicity and socio-economic situation, but prevention through healthy foods and adequate physical activity is cheaper than medication anywhere in the world. A large number of studies have confirmed the cost-effectiveness of interventions directed at changing lifestyle factors. It is cheaper to target the whole, still-healthy population, but interventions aimed at people at high risk are more target-specific and usually more expensive. Enhanced physical activity (a minimum of 30 minutes five times per week at low-to-medium intensity, plus resistance exercises to maintain muscle mass and strength, plus stretching and calisthenics to maintain joint mobility) can be promoted for a few hundred to a few thousand euros or dollars. The price of a gain in Quality/Disability-Adjusted Life Years, expressed as the Incremental Cost-Effectiveness/Utility Ratio, is known, estimated or modelled, and offers good value for money.
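The incremental cost-effectiveness ratio mentioned above is simply the extra cost divided by the extra health gain; a small sketch with invented numbers:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (e.g. per QALY gained)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Invented numbers: a lifestyle programme costing $600 more per person that
# adds 0.05 QALYs works out to roughly $12,000 per QALY gained.
print(icer(1600.0, 1000.0, 1.25, 1.20))   # ~12000.0
```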
Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.
Baugh, Tiffany P; Franzese, Christine B
2017-06-01
Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and understand differences in minimum, mean, and maximum among certain procedures as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had less than the required minimum reported in 2015. Conclusion Despite the ACGME instituting required minimum numbers for key indicator procedures, residents have graduated without meeting these minimums. Furthermore, there continues to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role as well as emphasizing frequent logging by residents will ensure programs have the most accurate data to evaluate their case volume.
Characteristics of worst hour rainfall rate for radio wave propagation modelling in Nigeria
NASA Astrophysics Data System (ADS)
Osita, Ibe; Nymphas, E. F.
2017-10-01
Radio waves, especially in the millimeter-wave band, are known to be attenuated by rain. Radio engineers and designers need to be able to predict the times of day when the radio signal will be attenuated so as to provide measures to mitigate this effect. This is achieved by characterizing the rainfall intensity for a particular region of interest by worst month and worst hour of the day. This paper characterizes rainfall in Nigeria by worst year, worst month, and worst hour. It is shown that for the period of study, 2008 and 2009 were the worst years, while September was the most frequent worst month in most of the stations. The evening hours (local time) were the worst hours of the day in virtually all the stations.
Smolen, Harry J; Murphy, Daniel R; Gahn, James C; Yu, Xueting; Curtis, Bradley H
2014-09-01
The treatment for patients with type 2 diabetes mellitus (T2DM) follows a stepwise progression. As a treatment loses its effectiveness, it is typically replaced with a more complex and frequently more costly treatment. Eventually this progression leads to the use of basal insulin typically with concomitant treatments (e.g., metformin, a GLP-1 RA [glucagon-like peptide-1 receptor agonist], a TZD [thiazolidinedione] or a DPP-4i [dipeptidyl peptidase 4 inhibitor]) and, ultimately, to basal-bolus insulin in some forms. As the cost of oral antidiabetics (OADs) and noninsulin injectables have approached, and in some cases exceeded, the cost of insulin, we reexamined the placement of insulin in T2DM treatment progression. Our hypothesis was that earlier use of insulin produces clinical and cost benefits due to its superior efficacy and treatment scalability at an acceptable cost when considered over a 5-year period. To (a) estimate clinical and payer cost outcomes of initiating insulin treatment for patients with T2DM earlier in their treatment progression and (b) estimate clinical and payer cost outcomes resulting from delays in escalating treatment for T2DM when indicated by patient hemoglobin A1c levels. We developed a Monte Carlo microsimulation model to estimate patients reaching target A1c, diabetes-related complications, mortality, and associated costs under various treatment strategies for newly diagnosed patients with T2DM. Treatment efficacies were modeled from results of randomized clinical trials, including the time and rate of A1c drift. A typical treatment progression was selected based on the American Diabetes Association and the European Association for the Study of Diabetes guidelines as the standard of care (SOC). Two treatment approaches were evaluated: two-stage insulin (basal plus antidiabetics followed by biphasic plus metformin) and single-stage insulin (biphasic plus metformin). For each approach, we analyzed multiple strategies. For each analysis, treatment steps were sequentially and cumulatively removed from the SOC until only the insulin steps remained. Delays in escalating treatment were evaluated by increasing the minimum time on a treatment within each strategy. The analysis time frame was 5 years. Relative to SOC, the two-stage insulin approach resulted in 0.10% to 1.79% more patients achieving target A1c (<7.0%), at incremental costs of $95 to $3,267. (The ranges are due to the different strategies within the approach.) With the single-stage approach, 0.50% to 2.63% more patients achieved the target A1c compared with SOC at an incremental cost of -$1,642 to $1,177. Major diabetes-related complications were reduced by 0.38% to 17.46% using the two-stage approach and 0.72% to 25.92% using the single-stage approach. Severe hypoglycemia increased by 17.97% to 60.43% using the two-stage approach and 6.44% to 68.87% using the single-stage approach. In the base case scenario, the minimum time on a specific treatment was 3 months. When the minimum time on each treatment was increased to 12 months (i.e., delayed), patients reaching A1c targets were reduced by 57%, complications increased by 13% to 76%, and mortality increased by 8% over 5 years when compared with the base case for the SOC. However, severe hypoglycemic events were reduced by 83%. 
As insulin was advanced earlier in therapy in the two-stage and single-stage approaches, patients reaching their A1c targets increased, severe hypoglycemic events increased, and diabetes-related complications and mortality decreased. Cost savings were estimated for 3 (of 4) strategies in the single-stage approach. Delays in treatment escalation substantially reduced patients reaching target A1c levels and increased the occurrence of major nonhypoglycemic diabetic complications. With the exception of substantial increases in severe hypoglycemic events, earlier use of insulin mitigates the clinical consequences of these delays.
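A toy sketch of the microsimulation logic described above: one patient's A1c drifts upward, treatment is escalated after a minimum time on the current step, and earlier insulin corresponds to larger A1c drops available sooner. All transition numbers are invented, not the model's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(a1c_drop_per_step, drift=0.15, a1c0=8.5, target=7.0,
             months=60, min_months_on_step=3):
    """Toy treatment-progression microsimulation for one patient; returns
    the number of months spent at or below the A1c target."""
    a1c, step, t_on_step, months_at_target = a1c0, 0, 0, 0
    for _ in range(months):
        escalate = (a1c > target and t_on_step >= min_months_on_step
                    and step < len(a1c_drop_per_step))
        if escalate:
            a1c -= a1c_drop_per_step[step]            # move to next treatment
            step, t_on_step = step + 1, 0
        a1c += drift / 12 + rng.normal(0.0, 0.02)     # slow upward drift
        t_on_step += 1
        months_at_target += a1c <= target
    return months_at_target

print(simulate([1.0, 0.8, 1.5]))   # OAD-first progression
print(simulate([1.8, 1.5]))        # earlier-insulin progression
```

Raising min_months_on_step mimics the delayed-escalation scenarios: escalation happens later, so fewer months are spent at target.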
Stressful life events and catechol-O-methyl-transferase (COMT) gene in bipolar disorder.
Hosang, Georgina M; Fisher, Helen L; Cohen-Woods, Sarah; McGuffin, Peter; Farmer, Anne E
2017-05-01
A small body of research suggests that gene-environment interactions play an important role in the development of bipolar disorder. The aim of the present study is to contribute to this work by exploring the relationship between stressful life events and the catechol-O-methyl-transferase (COMT) Val158Met polymorphism in bipolar disorder. Four hundred eighty-two bipolar cases and 205 psychiatrically healthy controls completed the List of Threatening Experiences Questionnaire. Bipolar cases reported the events experienced 6 months before their worst depressive and manic episodes; controls reported those events experienced 6 months prior to their interview. The genotypic information for the COMT Val158Met variant (rs4680) was extracted from GWAS analysis of the sample. The impact of stressful life events was moderated by the COMT genotype for the worst depressive episode using a Val dominant model (adjusted risk difference = 0.09, 95% confidence intervals = 0.003-0.18, P = .04). For the worst manic episodes no significant interactions between COMT and stressful life events were detected. This is the first study to explore the relationship between stressful life events and the COMT Val158Met polymorphism focusing solely on bipolar disorder. The results of this study highlight the importance of the interplay between genetic and environmental factors for bipolar depression. © 2017 Wiley Periodicals, Inc.
Using Water Transfers to Manage Supply Risk
NASA Astrophysics Data System (ADS)
Characklis, G. W.
2007-12-01
Most cities currently rely on water supplies with sufficient capacity to meet demand under almost all conditions. However, the rising costs of water supply development make the maintenance of infrequently used excess capacity increasingly expensive, and more utilities are considering the use of water transfers as a means of more cost-effectively meeting demand under drought conditions. Transfers can take place between utilities, as well as between different user groups (e.g., municipal and agricultural), and can involve both treated and untreated water. In cases where both the "buyer" and "seller" draw water from the same supply, contractual agreements alone can facilitate a transfer, but in other cases new infrastructure (e.g., pipelines) will be required. Developing and valuing transfer agreements and/or infrastructure investments requires probabilistic supply/demand analyses that incorporate elements of both hydrology and economics. The complexity of these analyses increases as more sophisticated types of agreements (e.g., options) are considered, and as utilities begin to consider how to integrate transfers into long-term planning efforts involving a more diversified portfolio of supply assets. This discussion will revolve around the methods used to develop minimum (expected) cost portfolios of supply assets that meet specified reliability goals. Two case studies, one in the eastern and one in the western U.S., will be described with attention to: the role that transfers can play in reducing average supply costs; tradeoffs between costs and supply reliability; and the effects of different transfer agreement types on the infrastructure capacity required to complete the transfers. Results will provide insights into the cost-savings potential of more flexible water supply strategies.
Economic losses to buildings due to tsunami impact: the case of Rhodes city, Greece
NASA Astrophysics Data System (ADS)
Triantafyllou, Ioanna; Novikova, Tatyana; Papadopoulos, Gerassimos
2017-04-01
The expected economic losses to buildings due to tsunami impact are of particular importance for tsunami risk management. However, only a few efforts can be found in this direction. In this study we approached this issue by selecting the city of Rhodes Isl., Greece, as a test site. The methodological steps followed include (a) selection of a worst-case scenario in the study area based on its tsunami history, which includes several powerful events, e.g. 142 AD, 1303, 1481, 1609, 1741, (b) numerical simulation of the tsunami and determination of the inundation zone, (c) application of the DAMASCHE empirical tool, produced by the SCHEMA EU-FP6 project, for the calculation of the damage level expected at each building as a function of the water depth in the inundation area, (d) calculation of the buildings that would need repair after partial damage and of those that would need reconstruction after total destruction, and (e) calculation of the cost implied for both repair and reconstruction. The data sets needed for the execution of these steps are susceptible to uncertainties, and the final results are therefore quite sensitive to changes in these data sets. Alternative costs were calculated by taking the several uncertainties involved into account. This research is a contribution to the EU-FP7 tsunami research project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe), grant agreement no: 603839, 2013-10-30.
A framework for multi-stakeholder decision-making and ...
We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions balancing conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
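A minimal empirical sketch of the CVaR quantity being minimized, illustrating how the probability level interpolates between the average-dissatisfaction and worst-case formulations the framework generalizes; the sample data are invented, and the paper minimizes CVaR inside an optimization model rather than merely evaluating it.

```python
import numpy as np

def empirical_cvar(dissatisfaction, alpha):
    """Mean of the worst (1 - alpha) tail of the dissatisfaction samples."""
    d = np.sort(np.asarray(dissatisfaction))
    var_level = np.quantile(d, alpha)      # value-at-risk at level alpha
    return d[d >= var_level].mean()

# alpha = 0 recovers the average-dissatisfaction objective; alpha near 1
# approaches the worst-case objective.
samples = np.random.default_rng(0).exponential(1.0, 10_000)
print(empirical_cvar(samples, 0.0), empirical_cvar(samples, 0.95))
```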
ERIC Educational Resources Information Center
Zirkel, Sabrina; Pollack, Terry M.
2016-01-01
We present a case analysis of the controversy and public debate generated from a school district's efforts to address racial inequities in educational outcomes by diverting special funds from the highest performing students seeking elite college admissions to the lowest performing students who were struggling to graduate from high school.…
2008-03-01
Adversarial Tripolarity ... Fallen Nuclear Dominoes ... In the power dimension, it is possible to imagine a best case (deep concert) and a worst case (adversarial tripolarity) and some less extreme outcomes, one ... vanquished and the sub-regions have settled into relative stability). 5. Adversarial U.S.-Russia-China tripolarity: In this world, the regional
ERIC Educational Resources Information Center
Marginson, Simon
This study examined the character of the emerging systems of corporate management in Australian universities and their effects on academic and administrative practices, focusing on relations of power. Case studies were conducted at 17 individual universities of various types. In each institution, interviews were conducted with senior…
Elementary Social Studies in 2005: Danger or Opportunity?--A Response to Jeff Passe
ERIC Educational Resources Information Center
Libresco, Andrea S.
2006-01-01
From the emphasis on lower-level test-prep materials to the disappearance of the subject altogether, elementary social studies is, in the best case scenario, being tested and, thus, taught with a heavy emphasis on recall; and, in the worst-case scenario, not being taught at all. In this article, the author responds to Jeff Passe's views on…
Thermal Analysis of a Metallic Wing Glove for a Mach-8 Boundary-Layer Experiment
NASA Technical Reports Server (NTRS)
Gong, Leslie; Richards, W. Lance
1998-01-01
A metallic 'glove' structure has been built and attached to the wing of the Pegasus™ space booster. An experiment on the upper surface of the glove has been designed to help validate boundary-layer stability codes in a free-flight environment. Three-dimensional thermal analyses have been performed to ensure that the glove structure design would be within allowable temperature limits in the experiment test section of the upper skin of the glove. Temperature results obtained from the design-case analysis show a peak temperature at the leading edge of 490 °F. For the upper surface of the glove, approximately 3 in. back from the leading edge, temperature calculations indicate transition occurs at approximately 45 sec into the flight profile. A worst-case heating analysis has also been performed to ensure that the glove structure would not have any detrimental effects on the primary objective of the Pegasus launch. A peak temperature of 805 °F has been calculated on the leading edge of the glove structure. The temperatures predicted for the design case are well within the temperature limits of the glove structure, and the worst-case heating analysis temperature results are acceptable for the mission objectives.
Ogah, Okechukwu S.; Stewart, Simon; Onwujekwe, Obinna E.; Falase, Ayodele O.; Adebayo, Saheed O.; Olunuga, Taiwo; Sliwa, Karen
2014-01-01
Background: Heart failure (HF) is a deadly, disabling and often costly syndrome worldwide. Unfortunately, there is a paucity of data describing its economic impact in sub-Saharan Africa, a region in which the number of relatively younger cases will inevitably rise. Methods: Health economic data were extracted from a prospective HF registry in a tertiary hospital situated in Abeokuta, southwest Nigeria. Outpatient and inpatient costs were computed from a representative cohort of 239 HF cases, including personnel, diagnostic and treatment resources used for their management over a 12-month period. Indirect costs were also calculated. The annual cost per person was then calculated. Results: Mean age of the cohort was 58.0±15.1 years and 53.1% were men. The total computed cost of care of HF in Abeokuta was 76,288,845 Nigerian Naira (US$508,595), translating to 319,200 Naira (US$2,128) per patient per year. The total cost of in-patient care (46% of total health care expenditure) was estimated as 34,996,477 Naira (about US$301,230). This comprised 17,899,977 Naira (50.9%; US$114,600) in direct costs and 17,806,500 Naira (49.1%; US$118,710) in indirect costs. Out-patient cost was estimated as 41,292,368 Naira (US$275,282). The relatively high cost of outpatient care was largely due to the cost of transportation for monthly follow-up visits. Payments were mostly made through out-of-pocket spending. Conclusion: The economic burden of HF in Nigeria is particularly high considering the relatively young age of affected cases, a minimum wage of 18,000 Naira (US$120) per month, and the considerable component of out-of-pocket spending for those affected. Health reforms designed to mitigate the individual and societal burden imposed by the syndrome are required. PMID:25415310
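The headline per-patient figure follows directly from the quoted totals; the exchange rate below (about 150 Naira per US dollar, implied by the abstract's conversions) is an assumption:

```python
total_cost_ngn = 76_288_845    # total annual cost of HF care quoted above
n_patients = 239
ngn_per_usd = 150              # assumed rate, consistent with the abstract's conversions

per_patient_ngn = total_cost_ngn / n_patients
print(round(per_patient_ngn))                 # ~319,200 Naira per patient per year
print(round(per_patient_ngn / ngn_per_usd))   # ~2,128 US dollars
```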
Medical education, cost and policy: what are the drivers for change? Commentary.
Walsh, Kieran
2014-01-01
Medical education is expensive. Its expense has led many stakeholders to speculate on how costs could be reduced. In an ideal world such decisions would be made on sound evidence; however, this is impossible in the absence of evidence. Sometimes practice will be informed by policy, but policy will not always be evidence based. So how is policy in the field of cost and value in medical education actually developed? The foremost influence on policy in cost and value should be evidence-based knowledge. Unfortunately, policy is sometimes influenced by what might at best be termed tradition and at worst inertia. Another influence on policy will be people, though some individuals may have more influence than others. A further influence on policy in this field is events, mainly events that have gone wrong. One final influence on emerging policy in medical education cost analysis is that of the media.
Aircraft Optimization for Minimum Environmental Impact
NASA Technical Reports Server (NTRS)
Antoine, Nicolas; Kroo, Ilan M.
2001-01-01
The objective of this research is to investigate the tradeoff between operating cost and environmental acceptability of commercial aircraft. This involves optimizing the aircraft design and mission to minimize operating cost while constraining exterior noise and emissions. Growth in air traffic and in the communities neighboring airports has resulted in increased pressure to severely penalize airlines that do not meet strict local noise and emissions requirements. As a result, environmental concerns have become potent driving forces in commercial aviation. Traditionally, aircraft have first been designed to meet performance and cost goals, and then adjusted to satisfy the environmental requirements at given airports. The focus of the present study is to determine the feasibility of including noise and emissions constraints in the early design of the aircraft and mission. This paper introduces the design tool and results from a case study involving a 250-passenger airliner.
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise-constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
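As a hedged illustration of why post-update accuracy understates the error at time of use, the sketch below coasts a one-dimensional track for a latency tau under a constant-velocity transition with discrete white-acceleration process noise (one of the maneuver models named above); the covariance values and noise intensity are hypothetical:

```python
import numpy as np

def extrapolated_rms(P, q, tau):
    """RMS position error after coasting for tau seconds under a
    piecewise-constant white-acceleration model with intensity q.
    P is the 2x2 post-update covariance over (position, velocity)."""
    F = np.array([[1.0, tau],
                  [0.0, 1.0]])                   # constant-velocity transition
    Q = q * np.array([[tau**4 / 4, tau**3 / 2],
                      [tau**3 / 2, tau**2]])     # discrete white-acceleration noise
    P_pred = F @ P @ F.T + Q
    return float(np.sqrt(P_pred[0, 0]))

P0 = np.array([[25.0, 5.0], [5.0, 4.0]])         # hypothetical post-update covariance
for tau in (0.0, 0.5, 1.0, 2.0):
    print(tau, extrapolated_rms(P0, q=1.0, tau=tau))
```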
Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. 
L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. 
A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2016-08-01
We report on a comprehensive all-sky search for periodic gravitational waves in the frequency band 100-1500 Hz and with a frequency time derivative in the range of [-1.18, +1.00] × 10^-8 Hz/s. Such a signal could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our galaxy. This search uses the data from the initial LIGO sixth science run and covers a larger parameter space with respect to any past search. A Loosely Coherent detection pipeline was applied to follow up weak outliers in both Gaussian (95% recovery rate) and non-Gaussian (75% recovery rate) bands. No gravitational wave signals were observed, and upper limits were placed on their strength. Our smallest upper limit on worst-case (linearly polarized) strain amplitude h0 is 9.7 × 10^-25 near 169 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 5.5 × 10^-24. Both cases refer to all sky locations and the entire range of frequency derivative values.
Zika virus in French Polynesia 2013-14: anatomy of a completed outbreak.
Musso, Didier; Bossin, Hervé; Mallet, Henri Pierre; Besnard, Marianne; Broult, Julien; Baudouin, Laure; Levi, José Eduardo; Sabino, Ester C; Ghawche, Frederic; Lanteri, Marion C; Baud, David
2018-05-01
The Zika virus crisis exemplified the risk associated with emerging pathogens and was a reminder that preparedness for the worst-case scenario, although challenging, is needed. Herein, we review all data reported during the unexpected emergence of Zika virus in French Polynesia in late 2013. We focus on the new findings reported during this outbreak, especially the first description of severe neurological complications in adults and the retrospective description of CNS malformations in neonates, the isolation of Zika virus in semen, the potential for blood-transfusion transmission, mother-to-child transmission, and the development of new diagnostic assays. We describe the effect of this outbreak on health systems, the implementation of vector-borne control strategies, and the line of communication used to alert the international community of the new risk associated with Zika virus. This outbreak highlighted the need for careful monitoring of all unexpected events that occur during an emergence, to implement surveillance and research programmes in parallel to management of cases, and to be prepared for the worst-case scenario. Copyright © 2018 Elsevier Ltd. All rights reserved.
Jones, Roy W; McCrone, Paul; Guilhaume, Chantal
2004-01-01
Clinical trials with memantine, an uncompetitive moderate-affinity NMDA antagonist, have shown improved clinical outcomes, increased independence and a trend towards delayed institutionalisation in patients with moderately severe-to-severe Alzheimer's disease. In a randomised, double-blind, placebo-controlled, 28-week study conducted in the US, reductions in resource utilisation and total healthcare costs were noted with memantine relative to placebo. While these findings suggest that, compared with placebo, memantine provides cost savings, further analyses may help to quantify potential economic gains over a longer treatment period. To evaluate the cost effectiveness of memantine therapy compared with no pharmacological treatment in patients with moderately severe-to-severe Alzheimer's disease over a 2-year period. A Markov model was constructed to simulate patient progression through a series of health states related to severity, dependency (determined by patient scores on the Alzheimer's Disease Cooperative Study-Activities of Daily Living [ADCS-ADL] inventory), and residential status ('institutionalisation'), with a time horizon of 2 years (each 6-month Markov cycle was repeated four times). Transition probabilities from one health state to another 6 months later were mainly derived from a 28-week, randomised, double-blind, placebo-controlled clinical trial. Inputs related to epidemiological and cost data were derived from a UK longitudinal epidemiological study, while data on quality-adjusted life-years (QALYs) were derived from a Danish longitudinal study. To ensure conservative estimates from the model, the base case analysis assumed drug effectiveness was limited to 12 months. Monte Carlo simulations were performed for each state parameter following definition of a priori distributions for the main variables of the model. Sensitivity analyses included a worst-case scenario in which memantine was effective for 6 months and one-way sensitivity analyses on key parameters. Finally, a subgroup analysis was performed to determine which patients were most likely to benefit from memantine. Informal care was not included in this model as the costs were considered from the National Health Service and Personal Social Services perspective. The base case analysis found that, compared with no treatment, memantine was associated with lower costs and greater clinical effectiveness in terms of years of independence, years in the community and QALYs. Sensitivity analyses supported these findings. For each category of Alzheimer's disease patient examined, treatment with memantine was a cost-effective strategy. The greatest economic gain of memantine treatment was in independent patients with a Mini-Mental State Examination score of ≥10. This model suggests that memantine treatment is cost effective and provides cost savings compared with no pharmacological treatment. These benefits appear to result from prolonged patient independence and delayed institutionalisation for moderately severe and severe Alzheimer's disease patients on memantine compared with no pharmacological treatment.
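A minimal cohort-style sketch of the type of Markov model described, with 6-month cycles repeated four times for the 2-year horizon; the three states, transition probabilities, costs, and QALY weights are illustrative placeholders rather than the published inputs, and death is omitted for brevity:

```python
import numpy as np

# States: independent, dependent, institutionalised (absorbing here).
T = np.array([[0.70, 0.20, 0.10],      # 6-month transition probabilities
              [0.00, 0.75, 0.25],
              [0.00, 0.00, 1.00]])
cost_per_cycle = np.array([1000.0, 4000.0, 9000.0])   # 6-month cost per state
qaly_per_cycle = np.array([0.35, 0.25, 0.15])         # 6-month QALYs per state

state = np.array([1.0, 0.0, 0.0])      # cohort starts independent
total_cost = 0.0
total_qaly = 0.0
for cycle in range(4):                 # four cycles = 2-year time horizon
    total_cost += state @ cost_per_cycle
    total_qaly += state @ qaly_per_cycle
    state = state @ T                  # advance the cohort by 6 months
print(total_cost, total_qaly)
```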
Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S
2014-09-30
To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health, saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. © Brennan et al 2014.
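The reported effect ratio can be checked directly from the quoted figures:

```python
# Checking the quoted "approximately 45 times greater effect" of a 45p
# minimum unit price versus the below-cost-selling ban.
ban_pct, mup45_pct = 0.08, 3.7        # % reduction in harmful drinkers' consumption
ban_units, mup45_units = 3, 137       # units of alcohol per drinker per year

print(mup45_pct / ban_pct)            # ~46 in relative terms
print(mup45_units / ban_units)        # ~46 in absolute units
```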
Joseph, Wout; Pareit, Daan; Vermeeren, Günter; Naudts, Dries; Verloock, Leen; Martens, Luc; Moerman, Ingrid
2013-01-01
Wireless Local Area Networks (WLANs) are commonly deployed in various environments. WLAN data packets are not transmitted continuously, yet worst-case exposure assessments often assume 100% activity, leading to huge overestimations. Actual duty cycles of WLAN are thus of importance for time-averaging of exposure when checking compliance with international guidelines on limiting adverse health effects. In this paper, duty cycles of WLAN using Wi-Fi technology are determined for large-scale exposure assessment at 179 locations for different environments and activities (file transfer, video streaming, audio, surfing on the internet, etc.). The median duty cycle equals 1.4% and the 95th percentile is 10.4% (standard deviation SD = 6.4%). The largest duty cycles are observed in urban and industrial environments. For actual applications, the theoretical upper limit for the WLAN duty cycle is 69.8% and 94.7% for the maximum and minimum physical data rate, respectively. For lower data rates, higher duty cycles will occur. Although counterintuitive at first sight, poor WLAN connections result in higher possible exposures. File transfer at the maximum data rate results in median duty cycles of 47.6% (SD = 16%), while it results in median values of 91.5% (SD = 18%) at the minimum data rate. Surfing and audio streaming use the wireless medium less intensively and therefore have median duty cycles lower than 3.2% (SD = 0.5-7.5%). For a specific example, overestimations up to a factor of 8 for electric fields occur when 100% activity is assumed instead of realistic duty cycles. Copyright © 2012 Elsevier Ltd. All rights reserved.
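Time-averaged power scales with the duty cycle, and the electric field scales with the square root of power, so assuming 100% activity overestimates fields by a factor of sqrt(1/duty cycle); a small check against the reported percentiles:

```python
import math

def field_overestimation(duty_cycle):
    """Factor by which a 100%-activity assumption overestimates the E-field."""
    return math.sqrt(1.0 / duty_cycle)

print(field_overestimation(0.014))    # median duty cycle 1.4%  -> ~8.5x
print(field_overestimation(0.104))    # 95th percentile 10.4%   -> ~3.1x
```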
Shuttle payload minimum cost vibroacoustic tests
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
This paper is directed toward the development of the methodology needed to evaluate cost effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.
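A hedged sketch of the decision-theoretic comparison described: each plan's expected cost is its test cost plus the probability-weighted cost of a flight vibration failure, and the optimum plan minimizes that sum. The plans, probabilities, and dollar figures below are hypothetical, not the paper's values:

```python
# Expected-cost comparison of alternate vibroacoustic test plans ($M, assumed).
plans = {
    "component_only":   {"test_cost": 2.0, "p_flight_failure": 0.030},
    "subassembly_high": {"test_cost": 1.2, "p_flight_failure": 0.012},
    "system_level":     {"test_cost": 1.5, "p_flight_failure": 0.015},
}
FAILURE_COST = 100.0   # consequence of a flight vibration failure (assumed)

def expected_cost(plan):
    return plan["test_cost"] + plan["p_flight_failure"] * FAILURE_COST

for name, plan in plans.items():
    print(name, expected_cost(plan))
print("lowest expected cost:", min(plans, key=lambda n: expected_cost(plans[n])))
```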
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
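A generic sketch of the multiplication-free idea: if the probability of the less probable symbol (LPS) is quantized to a power of two, the interval subdivision becomes a bit shift. This illustrates the general technique under that assumption, not necessarily the specific quantization developed here; renormalization and bit output are omitted:

```python
def encode_step(low, rng, symbol, k):
    """One interval update of a binary arithmetic coder with the LPS
    probability quantized to 2**-k, so rng * p becomes a shift."""
    lps_range = rng >> k               # approximates rng * 2**-k without multiplying
    if symbol == 1:                    # LPS: take the top subinterval
        low += rng - lps_range
        rng = lps_range
    else:                              # MPS: keep the bottom subinterval
        rng -= lps_range
    return low, rng

low, rng = 0, 1 << 16
for bit in [0, 0, 1, 0]:
    low, rng = encode_step(low, rng, bit, k=3)   # LPS probability ~ 1/8
    print(low, rng)
```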
Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.
DOT National Transportation Integrated Search
2013-06-01
Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer : program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon : monoxide (CO) concentrations near s...
Global climate change: The quantifiable sustainability challenge
Population growth and the pressures spawned by increasing demands for energy and resource-intensive goods, foods and services are driving unsustainable growth in greenhouse gas (GHG) emissions. Recent GHG emission trends are consistent with worst-case scenarios of the previous de...
Schefter, John E.; Hirsch, Robert M.
1980-01-01
A method for evaluating the cost-effectiveness of alternative strategies for dissolved-oxygen (DO) management is demonstrated, using the Chattahoochee River, Georgia, as an example. The conceptual framework for the analysis is suggested by the economic theory of production. The minimum flow of the river and the percentage of the total waste inflow receiving nitrification are considered to be two variable inputs used in the production of a given minimum concentration of DO in the river. Each of the inputs has a cost: the loss of dependable peak hydroelectric generating capacity at Buford Dam associated with flow augmentation, and the cost associated with nitrification of wastes. The least-cost combination of minimum flow and waste treatment necessary to achieve a prescribed minimum DO concentration is identified. Results of the study indicate that, in some instances, the waste-assimilation capacity of the Chattahoochee River can be substituted for increased waste treatment; the associated savings in waste-treatment costs more than offset the benefits foregone because of the loss of peak generating capacity at Buford Dam. The sensitivity of the results to the estimates of the cost of replacing peak generating capacity is examined. It is also demonstrated that a flexible approach to the management of DO in the Chattahoochee River may be much more cost effective than a more rigid, institutional approach wherein constraints are placed on the flow of the river and/or on waste-treatment practices. (USGS)
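A toy version of the least-cost search over the two inputs; the DO response and the cost functions below are invented placeholders standing in for the study's hydrologic and economic models:

```python
import itertools

DO_TARGET = 5.0   # required minimum dissolved oxygen, mg/L (assumed)

def do_concentration(flow, nitrified_fraction):
    return 2.0 + 0.02 * flow + 3.0 * nitrified_fraction    # assumed DO response

def total_cost(flow, nitrified_fraction):
    lost_capacity_cost = 0.5 * flow             # forgone peaking capacity (assumed)
    treatment_cost = 20.0 * nitrified_fraction  # nitrification cost (assumed)
    return lost_capacity_cost + treatment_cost

feasible = [
    (total_cost(f, n), f, n)
    for f, n in itertools.product(range(0, 101, 5), [i / 10 for i in range(11)])
    if do_concentration(f, n) >= DO_TARGET
]
print(min(feasible))   # least-cost (cost, flow, nitrified fraction)
```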
De Boever, A L; Keersmaekers, K; Vanmaele, G; Kerschbaum, T; Theuniers, G; De Boever, J A
2006-11-01
One hundred and seventy-two fixed reconstructions (317 prosthetic units), made on 283 ITI implants in 105 patients (age range 25-86 years) with a minimum follow-up period of 40 months, were included in the study to analyse the technical complication rate, complication types and repair costs. The mean evaluation time was 62.5 +/- 25.3 months. Eighty were single crowns and 92 were various types of fixed partial dentures (FPDs). In 45 cases the construction was screw retained and in 127 cases cemented with zinc phosphate cement or an acrylic-based cement. Complications occurred after a minimum period of 2 months and a maximum period of 100 months (mean: 35.9 +/- 21.4 months). Fifty-five prosthetic interventions were needed on 44 constructions (25%), of which 88% were in the molar/premolar region. The lowest percentage of complications occurred in single crowns (25%), the highest in 3-4 unit FPDs (35%) and in FPDs with an extension (44%). Of the necessary clinical repairs, 36% involved recementing and 38% tightening of screws. Of all interventions, 14% were classified as minor (no treatment or <10 min chair time), 70% as moderate (>10 min but <60 min chair time) and 14% as major interventions (>60 min and additional costs for replacement of parts and/or laboratory work). For seven patients the additional costs ranged from €28 to €840. Bruxing seemed to play a significant role in the frequency of complications. Longer constructions seemed to be more prone to complications. The relatively high occurrence of technical complications should be discussed with the patient before the start of treatment.
Sears, Erika Davis; Burke, James F.; Davis, Matthew M.; Chung, Kevin C.
2016-01-01
Background The purpose of this study is to 1) understand national variation in delay of emergency procedures in patients with open tibial fracture at the hospital level and 2) compare length of stay (LOS) and cost in patients cared for at the best and worst performing hospitals for delay. Methods We retrospectively analyzed the 2003–2009 Nationwide Inpatient Sample. Adult patients with a primary diagnosis of open tibial fracture were selected for inclusion. We calculated hospital probability of delay of emergency procedures beyond the day of admission (day 0). Multilevel linear regression random effects models were created to evaluate the relationship between the treating hospital’s tendency for delay (in quartiles) and the log-transformed outcomes of LOS and cost, while adjusting for patient and hospital variables. Results The final sample included 7,029 patients from 332 hospitals. Adjusted analyses demonstrate that patients treated at hospitals in the fourth (worst) quartile for delay were estimated to have 12% (95% CI 2–21%) higher cost compared to patients treated at hospitals in the first quartile. In addition, patients treated at hospitals in the fourth quartile had an estimated 11% (95% CI 4–17%) longer LOS compared to patients treated at hospitals in the first quartile. Conclusions Patients with open tibial fracture treated at hospitals with more timely initiation of surgical care had lower cost and shorter LOS than patients treated at hospitals with less timely initiation of care. Policies directed toward mitigating variation in care are not only beneficial for patient outcomes, but may also reduce unnecessary waste. Level II (Prognostic) PMID:23142940
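With log-transformed outcomes, a regression coefficient beta on the worst-delay quartile converts to a percentage effect as exp(beta) - 1; the coefficients below are hypothetical values chosen to be consistent with the reported 12% and 11% estimates:

```python
import math

for outcome, beta in [("cost", 0.113), ("length of stay", 0.104)]:
    pct = (math.exp(beta) - 1) * 100
    print(f"{outcome}: {pct:.1f}% higher in the worst-delay quartile")
```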
Lux, Michael P; Kraml, Florian; Wagner, Stefanie; Hack, Carolin C; Schulze, Christine; Faschingbauer, Florian; Winkler, Mathias; Fasching, Peter A; Beckmann, Matthias W; Hildebrandt, Thomas
2013-01-01
Debate is currently taking place over minimum case numbers for the care of premature infants and neonates in Germany. As a result of the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) guidelines for the quality of structures, processes, and results, which require high levels of staffing resources, Level I perinatal centers are increasingly becoming the focus of health-economics questions, specifically whether Level I structures are financially viable. Using a multistep contribution margin analysis, the operating results for the Obstetrics Section at the University Perinatal Center of Franconia (Universitäts-Perinatalzentrum Franken) were calculated for the year 2009. Costs arising per diagnosis-related group (DRG) (separated into variable costs and fixed costs) and the corresponding revenue generated were compared for 4,194 in-patients and neonates, as well as for 3,126 patients in the outpatient ultrasound and pregnancy clinics. With a positive operating result of €374,874.81, a Level I perinatal center on the whole initially appears to be financially viable from the obstetrics point of view (excluding neonatology), with a high bed occupancy rate and a profitable case mix. By contrast, the costs of prenatal diagnostics, with a negative contribution margin II of €50,313, cannot be covered. A total of 79.4% of DRG case numbers were distributed across five DRGs, all of which were associated with pregnancies and neonates with the lowest risk profiles. A Level I perinatal center is currently capable of covering its costs. However, the cost-revenue ratio is fragile due to the high requirements for staffing resources and numerous economic, social, and regional influencing factors.
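A minimal sketch of a multistep contribution margin calculation of the kind used in the analysis; the euro amounts are placeholders, not the center's accounts:

```python
def operating_result(revenue, variable_costs, direct_fixed_costs, overhead):
    cm1 = revenue - variable_costs        # contribution margin I
    cm2 = cm1 - direct_fixed_costs        # contribution margin II
    return cm1, cm2, cm2 - overhead       # operating result after overhead

cm1, cm2, result = operating_result(
    revenue=10_000_000, variable_costs=6_500_000,
    direct_fixed_costs=2_400_000, overhead=700_000)
print(cm1, cm2, result)
```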
The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2
Middleton, Richard Stephen; Yaw, Sean Patrick
2018-01-11
Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investments in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO2 storage capacities using a case study managing CO2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO2 sources (oil sands extraction and processing) with CO2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates, and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO2. We also include the role of stranded CO2: CO2 that a source was expecting to capture but cannot store due to substandard performance in the transport and storage infrastructure. Results show that the cost of getting the original infrastructure design wrong is significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO2 can transform a seemingly high-performing infrastructure design into the worst-case scenario.
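A small sketch of the stranded-CO2 accounting idea: captured CO2 that outstrips realized storage performance must be covered by extra infrastructure at additional cost. All quantities below are hypothetical:

```python
def stranded_and_extra_cost(captured_mt, designed_storage_mt,
                            realized_fraction, extra_cost_per_mt):
    """Stranded CO2 (Mt) and the cost of the additional infrastructure
    needed to store it when storage underperforms its design estimate."""
    realized_storage = designed_storage_mt * realized_fraction
    stranded = max(0.0, captured_mt - realized_storage)
    return stranded, stranded * extra_cost_per_mt

for frac in (1.0, 0.8, 0.5):   # storage performing at 100%, 80%, 50% of design
    print(frac, stranded_and_extra_cost(20.0, 20.0, frac, 12.0))
```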