Sample records for worst case optimization

  1. Impact of respiratory motion on worst-case scenario optimized intensity modulated proton therapy for lung cancers.

    PubMed

    Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe

    2015-01-01

    We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%- D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
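
    The worst-case scenario optimization referenced here scores a plan on a composite dose distribution built voxel by voxel from the least favorable uncertainty scenario. A minimal sketch of that idea (not the authors' implementation; the arrays, masks, and weights are illustrative):

    ```python
    import numpy as np

    def worst_case_objective(dose_scenarios, target_mask, oar_mask, prescribed_dose):
        """Score a plan by its worst-case dose distribution.

        dose_scenarios: array of shape (n_scenarios, n_voxels), one dose
        distribution per setup/range uncertainty scenario.
        """
        # Worst case for the target: the lowest dose any scenario delivers, per voxel.
        target_worst = dose_scenarios[:, target_mask].min(axis=0)
        # Worst case for an organ at risk: the highest dose, per voxel.
        oar_worst = dose_scenarios[:, oar_mask].max(axis=0)

        underdose_penalty = np.mean(np.maximum(prescribed_dose - target_worst, 0.0) ** 2)
        overdose_penalty = np.mean(oar_worst ** 2)
        return underdose_penalty + 0.1 * overdose_penalty
    ```

    During beamlet (spot-weight) optimization this objective is minimized, so the plan is pushed to keep the target covered and the organs spared even under the least favorable scenario.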

  2. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
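
    The Expected Improvement criterion at the heart of EGO has a closed form given the surrogate model's predictive mean and standard deviation. A minimal sketch (written for minimization; for the worst-case hot orbit the objective is negated so that maximizing temperature becomes a minimization):

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """Expected improvement (for minimization) at candidate points.

        mu, sigma: surrogate-model predictive mean and standard deviation.
        f_best: best (lowest) objective value observed so far.
        """
        sigma = np.maximum(sigma, 1e-12)          # guard against zero predicted variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ```

    The modification described in the abstract runs a multi-start optimizer over this surface to return several high-EI points per iteration, so the corresponding thermal-model runs can be executed in parallel.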

  3. Optimization of vibratory energy harvesters with stochastic parametric uncertainty: a new perspective

    NASA Astrophysics Data System (ADS)

    Haji Hosseinloo, Ashkan; Turitsyn, Konstantin

    2016-04-01

    Vibration energy harvesting has been shown to be a promising power source for many small-scale applications, mainly because of the considerable reduction in the energy consumption of the electronics and scalability issues of the conventional batteries. However, energy harvesters may not be as robust as the conventional batteries and their performance could drastically deteriorate in the presence of uncertainty in their parameters. Hence, study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While previous studies have focused on expectation optimization, we propose a new and more practical optimization perspective: optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and as a simple example apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power and then optimize for it under different confidence levels. The results show that there is a significant improvement in the worst-case power of a harvester designed in this way compared to that of a naively-optimized (deterministically-optimized) harvester.
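
    The worst-case (minimum) power perspective is a max-min problem over the uncertainty set. A generic sketch with a placeholder power model (the paper's piezoelectric harvester model and its confidence-level construction are more detailed):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def power(design_freq, natural_freq):
        """Placeholder power model: output peaks when the chosen excitation frequency
        matches the uncertain natural frequency; stands in for the harvester model."""
        return 1.0 / (1.0 + 50.0 * (design_freq - natural_freq) ** 2)

    # Uncertainty set for the natural frequency (e.g., a confidence interval).
    natural_freqs = np.linspace(0.9, 1.1, 201)

    def worst_case_power(design_freq):
        # Worst case = minimum power over the whole uncertainty set.
        return min(power(design_freq, w) for w in natural_freqs)

    # Maximize the worst-case power (negated so a minimizer can be used).
    result = minimize_scalar(lambda d: -worst_case_power(d), bounds=(0.8, 1.2), method="bounded")
    print("robust design frequency:", result.x, "worst-case power:", -result.fun)
    ```

    A deterministically optimized design would instead maximize power at the nominal natural frequency, which can perform poorly at the edges of the uncertainty interval.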

  4. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
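
    For reference, the standard FISTA iteration that the paper reinterprets as an optimized acceleration of the proximal gradient method (a minimal sketch applied to a LASSO toy problem; the paper's contribution is the worst-case analysis and the new algorithm, not this recap):

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def fista(grad_f, L, prox_g, x0, n_iter=100):
        """Standard FISTA for min f(x) + g(x), with L the Lipschitz constant of grad f."""
        x_prev = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            x = prox_g(y - grad_f(y) / L, 1.0 / L)        # proximal gradient step
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum step
            x_prev, t = x, t_next
        return x_prev

    # Toy LASSO example: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
    L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of grad f
    x_hat = fista(lambda x: A.T @ (A @ x - b), L,
                  lambda v, tau: soft_threshold(v, lam * tau),
                  np.zeros(100), n_iter=300)
    ```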

  5. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    PubMed

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-based cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-based and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases in which tight dose limits were hard to meet for most uncertainty scenarios. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull-based cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
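
    The two robust formulations compared in this study differ in where the worst case is taken: "worst case dose" first composes a voxel-wise worst-case dose distribution and scores it, while "minmax" scores each scenario and optimizes the single worst one. A schematic contrast (illustrative only; the actual LP/NLP models add constraints and separate target/organ handling):

    ```python
    import numpy as np

    def worst_case_dose_objective(dose_scenarios, objective):
        """'Worst case dose': build a composite voxel-wise worst-case dose
        (e.g., the per-voxel minimum for a target), then score that composite once."""
        composite = dose_scenarios.min(axis=0)
        return objective(composite)

    def minmax_objective(dose_scenarios, objective):
        """'Minmax': score each uncertainty scenario separately and optimize
        the worst (largest) scenario objective."""
        return max(objective(d) for d in dose_scenarios)
    ```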

  6. Optimal Analyses for 3×n AB Games in the Worst Case

    NASA Astrophysics Data System (ADS)

    Huang, Li-Te; Lin, Shun-Shii

    The past decades have witnessed a growing interest in research on deductive games such as Mastermind and AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. However, traditional tree-search approaches are not appropriate for solving this problem since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. Therefore, intensive analyses of optimal worst-case play of 3×n AB games are conducted, and a sophisticated method, called structural reduction, which aims at characterizing the worst situation in this game, is developed in this study. Furthermore, a worthwhile formula for calculating the optimal numbers of guesses required for arbitrary values of n is derived and proven to be final.

  7. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Schild, S; Bues, M

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against the worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.

  8. Query Optimization in Distributed Databases.

    DTIC Science & Technology

    1982-10-01

    general, the strategy a31 a11 a 3 is more time consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires...analytic behavior of those heuristic algorithms. Although some analytic results of worst case and average case analysis are difficult to obtain, some...

  9. Stochastic Robust Mathematical Programming Model for Power System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  10. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm, and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
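
    The selective idea can be summarized as letting each objective term read from either the worst-case dose or the nominal dose. A schematic sketch (the term structure and field names are illustrative, not the authors' data model):

    ```python
    def selective_robust_objective(nominal_dose, worst_case_dose, terms):
        """Evaluate each objective term on the worst-case dose if it is flagged as
        robust (e.g., CTV coverage) or on the nominal dose otherwise (e.g., PTV
        coverage or isodose shaping)."""
        total = 0.0
        for term in terms:
            dose = worst_case_dose if term["robust"] else nominal_dose
            total += term["weight"] * term["penalty"](dose)
        return total
    ```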

  11. Conceptual modeling for identification of worst case conditions in environmental risk assessment of nanomaterials using nZVI and C60 as case studies.

    PubMed

    Grieger, Khara D; Hansen, Steffen F; Sørensen, Peter B; Baun, Anders

    2011-09-01

    Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C(60) in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C(60) in these applications as well as demonstrate the use of worst-case conditions to optimize future research efforts for other nanomaterials. Through the application of the WCD model, we find that the most probable worst-case conditions for both case studies include i) active uptake mechanisms, ii) accumulation in organisms, iii) ecotoxicological response mechanisms such as reactive oxygen species (ROS) production and cell membrane damage or disruption, iv) surface properties of nZVI and C(60), and v) acute exposure tolerance of organisms. Additional estimates of worst-case conditions for C(60) also include the physical location of C(60) in the environment from surface run-off, cellular exposure routes for heterotrophic organisms, and the presence of light to amplify adverse effects. Based on results of this analysis, we recommend the prioritization of research for the selected applications within the following areas: organism active uptake ability of nZVI and C(60) and ecotoxicological response end-points and response mechanisms including ROS production and cell membrane damage, full nanomaterial characterization taking into account detailed information on nanomaterial surface properties, and investigations of dose-response relationships for a variety of organisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Fine-Scale Structure Design for 3D Printing

    NASA Astrophysics Data System (ADS)

    Panetta, Francis Julian

    Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.

  13. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  14. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  15. Robust guaranteed-cost adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.

    2017-05-01

    Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.

  16. Coherent detection of frequency-hopped quadrature modulations in the presence of jamming. I - QPSK and QASK modulations

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.

    1981-01-01

    This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.

  17. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments due to protons’ finite range depending on the tissue density. The worst case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties which could be encountered during daily clinical treatment. Methods: A fast and approximate dose calculation method is developed to calculate the dose for the IMPT plan under different setup and range uncertainties. Effects of two factors, the inverse square factor and range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y and z directions were created and the corresponding dose distributions were calculated using this approximated method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the result using the worst case scenario method. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the results using the worst case scenario. For D95 of CTVs, at least 97% of the 1000 perturbed cases show higher values than the worst case scenario. For D5 of CTVs, at least 98% of perturbed cases have lower values than the worst case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness level of MFO IMPT plans of H&N patients. The extensive sampling approach using the fast approximated method could be used in evaluating the effects of different factors on the robustness level of IMPT plans in the future.

  18. Reliability evaluation of high-performance, low-power FinFET standard cells based on mixed RBB/FBB technique

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole

    2017-04-01

    With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distribution of the device parameters and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reduction of 39.1% and 30.7% at most in worst-case delay and input-dependent leakage respectively while the normalized deviation shrinking in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit more reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).

  19. Geometrical Design of a Scalable Overlapping Planar Spiral Coil Array to Generate a Homogeneous Magnetic Field.

    PubMed

    Jow, Uei-Ming; Ghovanloo, Maysam

    2012-12-21

    We present a design methodology for an overlapping hexagonal planar spiral coil (hex-PSC) array, optimized for creation of a homogenous magnetic field for wireless power transmission to randomly moving objects. The modular hex-PSC array has been implemented in the form of three parallel conductive layers, for which an iterative optimization procedure defines the PSC geometries. Since the overlapping hex-PSCs in different layers have different characteristics, the worst case coil-coupling condition should be designed to provide the maximum power transfer efficiency (PTE) in order to minimize the spatial received power fluctuations. In the worst case, the transmitter (Tx) hex-PSC is overlapped by six PSCs and surrounded by six other adjacent PSCs. Using a receiver (Rx) coil, 20 mm in radius, at the coupling distance of 78 mm and maximum lateral misalignment of 49.1 mm (1/√3 of the PSC radius) we can receive power at a PTE of 19.6% from the worst case PSC. Furthermore, we have studied the effects of Rx coil tilting and concluded that the PTE degrades significantly when θ > 60°. Solutions are: 1) activating two adjacent overlapping hex-PSCs simultaneously with out-of-phase excitations to create horizontal magnetic flux and 2) inclusion of a small energy storage element in the Rx module to maintain power in the worst case scenarios. In order to verify the proposed design methodology, we have developed the EnerCage system, which aims to power up biological instruments attached to or implanted in freely behaving small animal subjects' bodies in long-term electrophysiology experiments within large experimental arenas.

  20. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: robust virtual sensor design.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-03-01

    The work was aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic model of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America

  1. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^-4 on the available data sets.

  2. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies.

    PubMed

    Davis, Michael J; Janke, Robert

    2018-01-04

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
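
    Optimizing sensor placements for worst-case contamination events is a minimax placement problem. A generic greedy sketch (not the procedure used in the study; the consequence table, detection model, and node names are assumptions):

    ```python
    def place_sensors_minimax(consequence, candidate_nodes, n_sensors):
        """Greedily place sensors to minimize the worst-case consequence.

        consequence[event][node]: consequence (e.g., people exposed) if `event`
        is first detected by a sensor at `node`; use a large penalty for nodes
        that never detect the event.
        """
        chosen = []
        for _ in range(n_sensors):
            def worst_case(extra):
                layout = chosen + [extra]
                # Each event is mitigated by whichever chosen sensor limits it most.
                return max(min(consequence[e][n] for n in layout) for e in consequence)
            chosen.append(min((n for n in candidate_nodes if n not in chosen), key=worst_case))
        return chosen
    ```

    The mean-case designs mentioned in the abstract correspond to replacing the outer max over events with an average.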

  3. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert

    2018-05-01

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.

  4. Worst case analysis: Earth sensor assembly for the tropical rainfall measuring mission observatory

    NASA Technical Reports Server (NTRS)

    Conley, Michael P.

    1993-01-01

    This worst case analysis verifies that the TRMMESA electronic design is capable of maintaining performance requirements when subjected to worst case circuit conditions. The TRMMESA design is a proven heritage design and is capable of withstanding the worst-case and most adverse circuit conditions. Changes made to the baseline DMSP design are relatively minor and do not adversely affect the worst case analysis of the TRMMESA electrical design.

  5. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to make this worst field point better until such a time as a different field point becomes worse. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
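
    The algorithm amounts to driving only the worst field point's merit downward, i.e., a minimax over sampled field points. A toy sketch with a placeholder merit function (a real implementation would call the ray-trace merit of the lens design code):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    field_angles = np.linspace(0.0, 1.0, 5)      # normalized field points to sample

    def merit(params, field_angle):
        """Placeholder image-quality metric per field point (smaller is better);
        stands in for the RMS spot size or wavefront error from a ray trace."""
        focus, tilt = params
        return (focus - 0.1 * field_angle ** 2) ** 2 + 0.01 * (tilt - field_angle) ** 2 + 0.05 * field_angle

    def worst_field_merit(params):
        # Only the worst (largest) field point drives the optimization.
        return max(merit(params, a) for a in field_angles)

    result = minimize(worst_field_merit, x0=[0.0, 0.0], method="Nelder-Mead")
    print("optimized parameters:", result.x, "worst-field merit:", result.fun)
    ```

    Focusing works the same way: the focus position is chosen so that the worst field point's image quality is as good as possible rather than weighting field points against each other.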

  6. SU-E-T-551: PTV Is the Worst-Case of CTV in Photon Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, D; Liu, W; Park, P

    2014-06-01

    Purpose: To examine the supposition of the static dose cloud and adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of clinical target volume (CTV) dose distribution for photon therapy in head and neck (H and N) plans. Methods: Five diverse H and N plans clinically delivered at our institution were selected. Isocenter for each plan was shifted positively and negatively in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (3 mm) for a total of six shifted plans per original plan. The perturbed plan dose was recalculated inmore » Eclipse (AAA v11.0.30) using the same, fixed fluence map as the original plan. The dose distributions for all plans were exported from the treatment planning system to determine the worst-case CTV dose distributions for each nominal plan. Two worst-case distributions, cold and hot, were defined by selecting the minimum or maximum dose per voxel from all the perturbed plans. The resulting dose volume histograms (DVH) were examined to evaluate the worst-case CTV and nominal PTV dose distributions. Results: Inspection demonstrates that the CTV DVH in the nominal dose distribution is indeed bounded by the CTV DVHs in the worst-case dose distributions. Furthermore, comparison of the D95% for the worst-case (cold) CTV and nominal PTV distributions by Pearson's chi-square test shows excellent agreement for all plans. Conclusion: The assumption that the nominal dose distribution for PTV represents the worst-case dose distribution for CTV appears valid for the five plans under examination. Although the worst-case dose distributions are unphysical since the dose per voxel is chosen independently, the cold worst-case distribution serves as a lower bound for the worst-case possible CTV coverage. Minor discrepancies between the nominal PTV dose distribution and worst-case CTV dose distribution are expected since the dose cloud is not strictly static. This research was supported by the NCI through grant K25CA168984, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, and by the Fraternal Order of Eagles Cancer Research Fund, the Career Development Award Program at Mayo Clinic.« less

  7. Time Safety Margin: Theory and Practice

    DTIC Science & Technology

    2016-09-01

    Basic Dive Recovery Terminology. The simplest definition of TSM: Time Safety Margin is the time to directly travel from the worst-case vector to an...Safety Margin (TSM). TSM is defined as the time in seconds to directly travel from the worst case vector (i.e. worst case combination of parameters)...invoked by this AFI, base recovery planning and risk management upon the calculated TSM.

  8. Optimal disturbance rejecting control of hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Biswas, Saroj K.; Ahmed, N. U.

    1994-01-01

    Optimal regulation of hyperbolic systems in the presence of unknown disturbances is considered. Necessary conditions for determining the optimal control that tracks a desired trajectory in the presence of the worst possible perturbations are developed. The results also characterize the worst possible disturbance that the system will be able to tolerate before any degradation of the system performance. Numerical results on the control of a vibrating beam are presented.

  9. Quantifying policy options for reducing future coronary heart disease mortality in England: a modelling study.

    PubMed

    Scholes, Shaun; Bajekal, Madhavi; Norman, Paul; O'Flaherty, Martin; Hawkins, Nathaniel; Kivimäki, Mika; Capewell, Simon; Raine, Rosalind

    2013-01-01

    To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 comparing four risk factor change scenarios. Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains. CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies.

  10. Quantifying Policy Options for Reducing Future Coronary Heart Disease Mortality in England: A Modelling Study

    PubMed Central

    Scholes, Shaun; Bajekal, Madhavi; Norman, Paul; O’Flaherty, Martin; Hawkins, Nathaniel; Kivimäki, Mika; Capewell, Simon; Raine, Rosalind

    2013-01-01

    Aims To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 comparing four risk factor change scenarios. Methods and Results Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains. Conclusions CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies. PMID:23936122

  11. Emergency Management Span of Control: Optimizing Organizational Structures to Better Prepare Vermont for the Next Major or Catastrophic Disaster

    DTIC Science & Technology

    2008-12-01

    full glare of media and public scrutiny, they are expected to perform flawlessly like a goalie in hockey or soccer, or a conversion kicker in...among all levels of government, not a plan that is pulled off the shelf only during worst-case disasters. The lifecycle of disasters entails a

  12. Synthesis of robust nonlinear autopilots using differential game theory

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1991-01-01

    A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.

  13. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves the problem and converges to a stationary point. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
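
    Dinkelbach's algorithm, mentioned in the last step, turns the fractional objective into a sequence of parametric subproblems. A generic sketch with a toy closed-form inner solver (in the paper the inner problem is handled by the CCCP-based beamforming step, not this toy):

    ```python
    def dinkelbach(numerator, denominator, solve_subproblem, x0, tol=1e-6, max_iter=50):
        """Dinkelbach's method for maximizing numerator(x) / denominator(x).

        solve_subproblem(lmbda) must return an x maximizing
        numerator(x) - lmbda * denominator(x)."""
        x = x0
        for _ in range(max_iter):
            lmbda = numerator(x) / denominator(x)
            x = solve_subproblem(lmbda)
            if abs(numerator(x) - lmbda * denominator(x)) < tol:
                break
        return x

    # Toy usage: maximize x / (x^2 + 1) over [0, 5]; the inner problem has a closed form
    # (stationary point x = 1 / (2*lmbda), clipped to the feasible interval).
    num = lambda x: x
    den = lambda x: x ** 2 + 1.0
    inner = lambda lmbda: min(max(1.0 / (2.0 * lmbda), 0.0), 5.0)
    print(dinkelbach(num, den, inner, x0=5.0))   # converges to x = 1
    ```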

  14. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves the problem and converges to a stationary point. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  15. Specifying design conservatism: Worst case versus probabilistic analysis

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F., Jr.

    1993-01-01

    Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
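
    The tradeoff the abstract describes can be seen in a few lines: worst-case analysis stacks every contribution at its extreme, while a probabilistic analysis asks how likely such stacking is. A toy illustration (uniform, independent contributions; purely illustrative numbers):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy example: a performance margin eroded by three independent uncertain contributions.
    lows, highs = np.array([-1.0, -2.0, -0.5]), np.array([1.0, 2.0, 0.5])

    # Worst-case analysis: stack every contribution at its most unfavorable extreme.
    worst_case_total = highs.sum()

    # Probabilistic analysis: sample the contributions and examine the distribution.
    samples = rng.uniform(lows, highs, size=(100_000, 3)).sum(axis=1)
    print("worst case:", worst_case_total)
    print("99.9th percentile:", np.percentile(samples, 99.9))   # typically well below the worst case
    ```

    The gap between the stacked worst case and a high percentile of the sampled distribution is exactly the extra conservatism that a worst-case specification buys, at the cost of saying nothing about the probability of failure.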

  16. 30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 2 2012-07-01 2012-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...

  17. 30 CFR 253.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000 bbls but not more than... must demonstrate OSFR in accordance with the following table: COF worst case oil-spill discharge volume... applicable table in paragraph (b)(1) or (b)(2) for a facility with a potential worst case oil-spill discharge...

  18. 30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 2 2013-07-01 2013-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...

  19. 30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 2 2014-07-01 2014-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...

  20. 30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 2 2011-07-01 2011-07-01 false How do I determine the worst case oil-spill... ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To...

  1. 30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How do I determine the worst case oil-spill... INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate the amount...

  2. Lower bound for LCD image quality

    NASA Astrophysics Data System (ADS)

    Olson, William P.; Balram, Nikhil

    1996-03-01

    The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics the information of interest is often at the highest frequencies in the image. Since LCDs are sampled data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above it is essential to have a lower (worst case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst case modulation transfer function (MTF). The intersection of the worst case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst case limiting resolution, this MTF is combined with the CTF to produce objective worst case image quality values using the modulation transfer function area (MTFA) metric.

  3. Probabilistic Solar Energetic Particle Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.

    2011-01-01

    To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceeds, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provides the worst-case differential proton spectrum. This model is based on data from the IMP-8 and GOES spacecraft, which provide a database extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
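
    The notion of a worst-case environment at a user-specified confidence level can be sketched as a percentile of a fluence distribution. The sample below uses synthetic lognormal fluences purely as a stand-in for the IMP-8/GOES database; it is not the authors' model.

        import numpy as np

        # Synthetic event-integrated >10 MeV proton fluences (protons/cm^2); stand-in data only.
        rng = np.random.default_rng(0)
        fluences = rng.lognormal(mean=19.0, sigma=1.5, size=10_000)

        def worst_case_fluence(samples, confidence):
            """Fluence that is not exceeded with the given confidence (e.g. 0.90, 0.99)."""
            return np.percentile(samples, 100.0 * confidence)

        for conf in (0.90, 0.95, 0.99):
            print(f"{conf:.0%} confidence worst-case fluence: "
                  f"{worst_case_fluence(fluences, conf):.3e} p/cm^2")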

  4. Conscious worst case definition for risk assessment, part I: a knowledge mapping approach for defining most critical risk factors in integrative risk management of chemicals and nanomaterials.

    PubMed

    Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders

    2010-08-15

    This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which again challenges risk assessors in terms of giving the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis. Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk components in cumulative risk management. Copyright 2009 Elsevier B.V. All rights reserved.

  5. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The query complexities for the worst case are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity for the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
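
    The quoted bounds can be compared numerically. The sketch below only evaluates the stated asymptotic exponents for a few problem sizes and ignores constant factors; the values of n and M′ are arbitrary.

        import math

        # Hamiltonian-cycle mapping on 3-regular graphs: quantum O(2^(n/4)) vs classical O(2^(31n/96)).
        for n in (48, 96, 192):
            quantum = 2 ** (n / 4)
            classical = 2 ** (31 * n / 96)
            print(f"n={n:4d}: quantum ~ {quantum:.3e}, classical ~ {classical:.3e}, "
                  f"ratio ~ {classical / quantum:.3e}")

        # Generic Exact Satisfiability: search O(sqrt(2^(n-M'))) vs counting O(2^(n-M')).
        n, M_prime = 40, 10
        print("search bound:", math.isqrt(2 ** (n - M_prime)),
              " counting bound:", 2 ** (n - M_prime))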

  6. Assessing the robustness of passive scattering proton therapy with regard to local recurrence in stage III non-small cell lung cancer: a secondary analysis of a phase II trial.

    PubMed

    Zhu, Zhengfei; Liu, Wei; Gillin, Michael; Gomez, Daniel R; Komaki, Ritsuko; Cox, James D; Mohan, Radhe; Chang, Joe Y

    2014-05-06

    We assessed the robustness of passive scattering proton therapy (PSPT) plans for patients in a phase II trial of PSPT for stage III non-small cell lung cancer (NSCLC) by using the worst-case scenario method, and compared the worst-case dose distributions with the appearance of locally recurrent lesions. Worst-case dose distributions were generated for each of 9 patients who experienced recurrence after concurrent chemotherapy and PSPT to 74 Gy(RBE) for stage III NSCLC by simulating and incorporating uncertainties associated with set-up, respiration-induced organ motion, and proton range in the planning process. The worst-case CT scans were then fused with the positron emission tomography (PET) scans to locate the recurrence. Although the volumes enclosed by the prescription isodose lines in the worst-case dose distributions were consistently smaller than enclosed volumes in the nominal plans, the target dose coverage was not significantly affected: only one patient had a recurrence outside the prescription isodose lines in the worst-case plan. PSPT is a relatively robust technique. Local recurrence was not associated with target underdosage resulting from estimated uncertainties in 8 of 9 cases.

  7. Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction

    NASA Technical Reports Server (NTRS)

    Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.

    2006-01-01

    The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with an appropriate specification, in NPG8020.12C as a low-temperature complementary technique to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study provided an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst case D value may be imposed, a process humidity range for which the worst case D value may be imposed, and robustness to selected spacecraft material substrates.

  8. Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?

    NASA Astrophysics Data System (ADS)

    Ferrie, Chris; Blume-Kohout, Robin

    2011-03-01

    Quantum state estimation (a.k.a. "tomography") plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g. for classical coins or dice - but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the "noisy coin." Observations on noisy coins are unreliable - much like soliciting sensitive information such as one's tax preparation habits. So, like a quantum system, it cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.

  9. Derivation and experimental verification of clock synchronization theory

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1994-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.

  10. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.

  11. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance, then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  12. Decision making with epistemic uncertainty under safety constraints: An application to seismic design

    USGS Publications Warehouse

    Veneziano, D.; Agarwal, A.; Karaca, E.

    2009-01-01

    The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project. © 2009 Elsevier Ltd. All rights reserved.

  13. Monitoring Churn in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger

    Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.

  14. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.

  15. Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  16. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-21

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated with inclusions of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimate the worst-case RF-induced heating in a multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and there exist multiple SAR or power constraints to be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of this original non-convex problem, whose solution closely approximates the true worst-case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
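
    The "common approach" reviewed here, maximizing a ratio of two Hermitian quadratic forms through a generalized eigenvalue problem, can be sketched with random stand-in matrices. The matrices below are not coil or body models, and the rigorous SDP relaxation with multiple SAR constraints that the paper derives is not reproduced.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(1)
        n_ch = 8                                  # number of RF transmit channels

        def random_psd(n):
            """Random Hermitian positive semi-definite stand-in for a SAR quadratic form."""
            M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            return M @ M.conj().T / n

        A = random_psd(n_ch)                      # local deposition near the implant electrode
        B = random_psd(n_ch)                      # global SAR (or power) constraint form

        # Single-constraint worst case: maximize (x^H A x)/(x^H B x) via A x = lambda B x.
        vals, vecs = eigh(A, B)
        worst_ratio = vals[-1]                    # largest generalized eigenvalue
        x_worst = vecs[:, -1]                     # corresponding excitation, up to scaling
        print("worst-case local/global ratio (single constraint):", float(worst_ratio))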

  17. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-01

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated with inclusions of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimate the worst-case RF-induced heating in multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and there exist multiple SAR or power constraints to be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of this original non-convex problem, whose solution closely approximates the true worst-case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.

  18. The impact of the fast ion fluxes and thermal plasma loads on the design of the ITER fast ion loss detector

    NASA Astrophysics Data System (ADS)

    Kocan, M.; Garcia-Munoz, M.; Ayllon-Guerola, J.; Bertalot, L.; Bonnet, Y.; Casal, N.; Galdon, J.; Garcia-Lopez, J.; Giacomin, T.; Gonzalez-Martin, J.; Gunn, J. P.; Rodriguez-Ramos, M.; Reichle, R.; Rivero-Rodriguez, J. F.; Sanchis-Sanchez, L.; Vayakis, G.; Veshchev, E.; Vorpahl, C.; Walsh, M.; Walton, R.

    2017-12-01

    Thermal plasma loads on the ITER Fast Ion Loss Detector (FILD) are studied for a Q_DT = 10 burning plasma equilibrium using 3D field-line tracing. The simulations are performed for a FILD insertion 9-13 cm past the port plasma-facing surface, optimized for fast ion measurements, and include the worst-case perturbation of the plasma boundary and the error in the magnetic reconstruction. The FILD head is exposed to superimposed time-averaged ELM heat load, static inter-ELM heat flux and plasma radiation. The study includes an estimate of the instantaneous temperature rise due to individual 0.6 MJ controlled ELMs. The maximum time-averaged surface heat load is ≲12 MW/m² and, for a FILD insertion time of 0.2 s, raises the FILD surface temperature to well below the melting temperature of the materials considered here. The worst-case instantaneous temperature rise during controlled 0.6 MJ ELMs is also significantly smaller than the melting temperature of, e.g., tungsten or molybdenum, foreseen for the FILD housing.

  19. 49 CFR 194.5 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...

  20. 49 CFR 194.5 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...

  1. 49 CFR 194.5 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...

  2. The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.

    PubMed

    Qu, Shaojian; Ji, Ying

    2016-01-01

    In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that the robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). For an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently in real-world applications.
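
    For a fixed strategy profile, the inner "worst-case weights" problem of one player is a linear program over the weight polytope. The sketch below uses made-up objective values and an example polytope; it does not compute the robust-weighted Nash equilibrium itself, which requires the MPEC described above.

        import numpy as np
        from scipy.optimize import linprog

        f = np.array([3.0, 1.5, 2.2])    # one player's objective values (costs) at a fixed profile

        # Weight polytope W: w >= 0, sum(w) = 1, and (as an example restriction) w_1 <= 0.5.
        A_ub, b_ub = np.array([[1.0, 0.0, 0.0]]), np.array([0.5])
        A_eq, b_eq = np.ones((1, 3)), np.array([1.0])

        # max_{w in W} w.f  ==  -min_{w in W} (-f).w
        res = linprog(-f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
        print("worst-case weights:", res.x, " worst-case weighted cost:", -res.fun)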

  3. An SEU resistant 256K SOI SRAM

    NASA Astrophysics Data System (ADS)

    Hite, L. R.; Lu, H.; Houston, T. W.; Hurta, D. S.; Bailey, W. E.

    1992-12-01

    A novel SEU (single event upset) resistant SRAM (static random access memory) cell has been implemented in a 256K SOI (silicon on insulator) SRAM that has attractive performance characteristics over the military temperature range of -55 to +125 C. These include worst-case access time of 40 ns with an active power of only 150 mW at 25 MHz, and a worst-case minimum WRITE pulse width of 20 ns. Measured SEU performance gives an Adams 10 percent worst-case error rate of 3.4 × 10^-11 errors/bit-day using the CRUP code with a conservative first-upset LET threshold. Modeling does show that higher bipolar gain than that measured on a sample from the SRAM lot would produce a lower error rate. Measurements show the worst-case supply voltage for SEU to be 5.5 V. Analysis has shown this to be primarily caused by the drain voltage dependence of the beta of the SOI parasitic bipolar transistor. Based on this, SEU experiments with SOI devices should include measurements as a function of supply voltage, rather than the traditional 4.5 V, to determine the worst-case condition.

  4. Efficacy and cost-efficacy of biologic therapies for moderate to severe psoriasis: a meta-analysis and cost-efficacy analysis using the intention-to-treat principle.

    PubMed

    Chi, Ching-Chi; Wang, Shu-Hui

    2014-01-01

    Compared to conventional therapies, biologics are more effective but expensive in treating psoriasis. To evaluate the efficacy and cost-efficacy of biologic therapies for psoriasis. We conducted a meta-analysis to calculate the efficacy of etanercept, adalimumab, infliximab, and ustekinumab for at least 75% reduction in the Psoriasis Area and Severity Index score (PASI 75) and Physician's Global Assessment clear/minimal (PGA 0/1). The cost-efficacy was assessed by calculating the incremental cost-effectiveness ratio (ICER) per subject achieving PASI 75 and PGA 0/1. The incremental efficacy regarding PASI 75 was 55% (95% confidence interval (95% CI) 38%-72%), 63% (95% CI 59%-67%), 71% (95% CI 67%-76%), 67% (95% CI 62%-73%), and 72% (95% CI 68%-75%) for etanercept, adalimumab, infliximab, and ustekinumab 45 mg and 90 mg, respectively. The corresponding 6-month ICER regarding PASI 75 was $32,643 (best case $24,936; worst case $47,246), $21,315 (best case $20,043; worst case $22,760), $27,782 (best case $25,954; worst case $29,440), $25,055 (best case $22,996; worst case $27,075), and $46,630 (best case $44,765; worst case $49,373), respectively. The results regarding PGA 0/1 were similar. Infliximab and ustekinumab 90 mg had the highest efficacy. Meanwhile, adalimumab had the best cost-efficacy, followed by ustekinumab 45 mg and infliximab.
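
    The cost-efficacy metric used here is the incremental cost divided by the incremental efficacy, i.e., the cost per additional responder. The sketch below illustrates the arithmetic with hypothetical 6-month costs; it does not use the drug prices or the pooled estimates from the published analysis.

        # Cost per additional PASI 75 responder (ICER), with placeholder costs.
        drugs = {
            #                  hypothetical 6-month cost (USD), incremental PASI 75 response
            "etanercept":      (18_000, 0.55),
            "adalimumab":      (13_000, 0.63),
            "ustekinumab45mg": (17_000, 0.67),
        }

        for name, (cost, delta_eff) in drugs.items():
            icer = cost / delta_eff            # incremental cost / incremental efficacy
            print(f"{name:16s} ICER per PASI 75 responder: ${icer:,.0f}")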

  5. A search game model of the scatter hoarder's problem

    PubMed Central

    Alpern, Steve; Fokkink, Robbert; Lidbetter, Thomas; Clayton, Nicola S.

    2012-01-01

    Scatter hoarders are animals (e.g. squirrels) who cache food (nuts) over a number of sites for later collection. A certain minimum amount of food must be recovered, possibly after pilfering by another animal, in order to survive the winter. An optimal caching strategy is one that maximizes the survival probability, given worst case behaviour of the pilferer. We modify certain ‘accumulation games’ studied by Kikuta & Ruckle (2000 J. Optim. Theory Appl.) and Kikuta & Ruckle (2001 Naval Res. Logist.), which modelled the problem of optimal diversification of resources against catastrophic loss, to include the depth at which the food is hidden at each caching site. Optimal caching strategies can then be determined as equilibria in a new ‘caching game’. We show how the distribution of food over sites and the site-depths of the optimal caching varies with the animal's survival requirements and the amount of pilfering. We show that in some cases, ‘decoy nuts’ are required to be placed above other nuts that are buried further down at the same site. Methods from the field of search games are used. Some empirically observed behaviour can be shown to be optimal in our model. PMID:22012971

  6. Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory

    DTIC Science & Technology

    1988-03-01

    ... breakdown point and the additional assumption of φ-mixing on the nominal measures. ... influence function. The structure of the optimal algorithm ... are i.i.d. sequences of Gaussian random variables with identical variance σ². For the breakdown point and the influence function we will use ... algebraic sign for i = 0, 1. Here z will be chosen such that it leads to worst-case or earliest breakdown. Next, the influence function measures ...

  7. On meeting capital requirements with a chance-constrained optimization model.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, a treasury bill, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed using numerical procedures in order to derive valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
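
    A generic illustration of how a chance constraint becomes a deterministic convex constraint is given below, assuming a Gaussian loss for simplicity; the paper's actual convex counterpart is built from a modified CreditMetrics model and is not reproduced here. All numbers are toy values.

        from scipy.stats import norm

        # Require P(capital - loss >= k * RWA) >= 1 - eps with loss ~ Normal(mu, sigma),
        # which is equivalent to: capital >= k * RWA + mu + z_{1-eps} * sigma.
        k, RWA = 0.105, 800.0            # Basel III-style ratio and risk-weighted assets (toy)
        mu, sigma, eps = 20.0, 12.0, 0.05

        z = norm.ppf(1.0 - eps)
        required_capital = k * RWA + mu + z * sigma
        print(f"minimum capital meeting the {1 - eps:.0%} chance constraint: {required_capital:.1f}")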

  8. Robust Constrained Optimization Approach to Control Design for International Space Station Centrifuge Rotor Auto Balancing Control System

    NASA Technical Reports Server (NTRS)

    Postma, Barry Dirk

    2005-01-01

    This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.

  9. 30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... associated with the facility. In determining the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your...) For exploratory or development drilling operations, the size of your worst case discharge scenario is...

  10. Effect of pesticide fate parameters and their uncertainty on the selection of 'worst-case' scenarios of pesticide leaching to groundwater.

    PubMed

    Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry

    2011-03-01

    For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
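
    The percentile-based scenario selection discussed above can be sketched as follows, with synthetic concentrations standing in for model predictions over a region; the second pesticide illustrates the scenario-uncertainty point that rankings differ between compounds.

        import numpy as np

        rng = np.random.default_rng(2)
        n_sites = 500
        conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n_sites)       # ug/L, pesticide A

        target = np.percentile(conc, 90)                               # 90th spatial percentile
        scenario = int(np.argmin(np.abs(conc - target)))               # site closest to that percentile
        print(f"90th-percentile concentration {target:.3f} ug/L, worst-case scenario: site {scenario}")

        # The same site generally sits at a different percentile for another pesticide.
        conc_b = conc * rng.lognormal(mean=0.0, sigma=0.5, size=n_sites)
        rank_b = (conc_b < conc_b[scenario]).mean()
        print(f"for pesticide B, that site sits at the {100 * rank_b:.0f}th percentile")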

  11. The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions

    PubMed Central

    Qu, Shaojian; Ji, Ying

    2016-01-01

    In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero sum game model where each player has more than one competing objective. Our “worst-case weighted multi-objective game” model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call “robust-weighted Nash equilibrium”. We prove that the robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). For an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently in real-world applications. PMID:26820512

  12. 30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your scenario must discuss how to respond to... drilling operations, the size of your worst case discharge scenario is the daily volume possible from an...

  13. Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration

    NASA Astrophysics Data System (ADS)

    Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.

    2010-12-01

    The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface-applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes, such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation, as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model requiring less detailed information about the soil and weather conditions but still representing the effect of soil and climate on pesticide leaching using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision of the detailed model for predicting pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but has complete coverage of the area, so that it selects a worst-case condition more accurately.

  14. A Framework to Improve Surgeon Communication in High-Stakes Surgical Decisions: Best Case/Worst Case.

    PubMed

    Taylor, Lauren J; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Brasel, Karen J; Johnson, Sara K; Zelenski, Amy; Rathouz, Paul J; Zhao, Qianqian; Kwekkeboom, Kristine L; Campbell, Toby C; Schwarze, Margaret L

    2017-06-01

    Although many older adults prefer to avoid burdensome interventions with limited ability to preserve their functional status, aggressive treatments, including surgery, are common near the end of life. Shared decision making is critical to achieve value-concordant treatment decisions and minimize unwanted care. However, communication in the acute inpatient setting is challenging. To evaluate the proof of concept of an intervention to teach surgeons to use the Best Case/Worst Case framework as a strategy to change surgeon communication and promote shared decision making during high-stakes surgical decisions. Our prospective pre-post study was conducted from June 2014 to August 2015, and data were analyzed using a mixed methods approach. The data were drawn from decision-making conversations between 32 older inpatients with an acute nonemergent surgical problem, 30 family members, and 25 surgeons at 1 tertiary care hospital in Madison, Wisconsin. A 2-hour training session to teach each study-enrolled surgeon to use the Best Case/Worst Case communication framework. We scored conversation transcripts using OPTION 5, an observer measure of shared decision making, and used qualitative content analysis to characterize patterns in conversation structure, description of outcomes, and deliberation over treatment alternatives. The study participants were patients aged 68 to 95 years (n = 32), 44% of whom had 5 or more comorbid conditions; family members of patients (n = 30); and surgeons (n = 17). The median OPTION 5 score improved from 41 preintervention (interquartile range, 26-66) to 74 after Best Case/Worst Case training (interquartile range, 60-81). Before training, surgeons described the patient's problem in conjunction with an operative solution, directed deliberation over options, listed discrete procedural risks, and did not integrate preferences into a treatment recommendation. After training, surgeons using Best Case/Worst Case clearly presented a choice between treatments, described a range of postoperative trajectories including functional decline, and involved patients and families in deliberation. Using the Best Case/Worst Case framework changed surgeon communication by shifting the focus of decision-making conversations from an isolated surgical problem to a discussion about treatment alternatives and outcomes. This intervention can help surgeons structure challenging conversations to promote shared decision making in the acute setting.

  15. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 28 2011-07-01 2011-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  16. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 29 2012-07-01 2012-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  17. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 29 2013-07-01 2013-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  18. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 28 2014-07-01 2014-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  19. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 1

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    Electrical characterization and qualification tests were performed on the RCA MWS5001D, 1024 by 1-bit, CMOS, random access memory. Characterization tests were performed on five devices. The tests included functional tests, AC parametric worst case pattern selection test, determination of worst-case transition for setup and hold times and a series of schmoo plots. The qualification tests were performed on 32 devices and included a 2000 hour burn in with electrical tests performed at 0 hours and after 168, 1000, and 2000 hours of burn in. The tests performed included functional tests and AC and DC parametric tests. All of the tests in the characterization phase, with the exception of the worst-case transition test, were performed at ambient temperatures of 25, -55 and 125 C. The worst-case transition test was performed at 25 C. The preburn in electrical tests were performed at 25, -55, and 125 C. All burn in endpoint tests were performed at 25, -40, -55, 85, and 125 C.

  20. On optimal current patterns for electrical impedance tomography.

    PubMed

    Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D

    2005-02-01

    We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogeneous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
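
    The trigonometric patterns referred to above can be generated directly for equally spaced electrodes; the sketch below only constructs the patterns and checks that each carries zero net current and that they are mutually orthogonal, without reproducing the variance analysis.

        import numpy as np

        L = 16                                              # number of electrodes
        theta = 2 * np.pi * np.arange(L) / L                # electrode angles

        patterns = []
        for k in range(1, L // 2):
            patterns.append(np.cos(k * theta))
            patterns.append(np.sin(k * theta))
        patterns.append(np.cos((L // 2) * theta))           # alternating +1/-1 pattern
        patterns = np.array(patterns)                       # (L-1) current patterns

        gram = patterns @ patterns.T
        print("number of patterns:", patterns.shape[0])
        print("max |net current| over patterns:", np.abs(patterns.sum(axis=1)).max())
        print("mutually orthogonal:", np.allclose(gram, np.diag(np.diag(gram))))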

  1. New Algorithms and Lower Bounds for Sequential-Access Data Compression

    NASA Astrophysics Data System (ADS)

    Gagie, Travis

    2009-02-01

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.

  2. Beyond Worst-Case Analysis in Privacy and Clustering: Exploiting Explicit and Implicit Assumptions

    DTIC Science & Technology

    2013-08-01

    Dwork et al [63]. Given a query function f, the curator first estimates the global sensitivity of f, denoted GS(f) = max_{D,D′} (f(D) − f(D′)), then outputs f... Ostrovsky et al [121]. Ostrovsky et al study instances in which the ratio between the cost of the optimal (k − 1)-means solution and the cost of the... k-median objective. We also build on the work of Balcan et al [25] that investigate the connection between point-wise approximations of the target

  3. Minimizing the health and climate impacts of emissions from heavy-duty public transportation bus fleets through operational optimization.

    PubMed

    Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J

    2013-04-16

    In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and in the exposure potential of bus routes can be exploited through optimization (e.g., of how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from the worst- to the best-case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
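
    When the impact of a bus on a route is approximated as (emission rate) × (route exposure potential), the vehicle-to-route assignment can be optimized with a standard assignment solver. The sketch below uses made-up emission and exposure numbers and a simple linear impact model; it is not the study's dispersion or intake modelling.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        emission_rate = np.array([1.0, 0.4, 0.4, 2.5])   # g PM2.5 per km for 4 buses (hypothetical)
        exposure      = np.array([5.0, 1.2, 0.8, 3.0])   # exposure potential per route (hypothetical)

        cost = np.outer(emission_rate, exposure)          # health-impact proxy of bus i on route j
        rows, cols = linear_sum_assignment(cost)          # impact-minimizing assignment

        best = cost[rows, cols].sum()
        contrast = cost[rows, cols[::-1]].sum()           # a deliberately bad assignment for contrast
        print("optimized assignment:", dict(zip(rows.tolist(), cols.tolist())))
        print(f"impact: optimized {best:.2f} vs reversed {contrast:.2f}")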

  4. Optimal and robust control of transition

    NASA Technical Reports Server (NTRS)

    Bewley, T. R.; Agarwal, R.

    1996-01-01

    Optimal and robust control theories are used to determine feedback control rules that effectively stabilize a linearly unstable flow in a plane channel. Wall transpiration (unsteady blowing/suction) with zero net mass flux is used as the control. Control algorithms are considered that depend both on full flowfield information and on estimates of that flowfield based on wall skin-friction measurements only. The development of these control algorithms accounts for modeling errors and measurement noise in a rigorous fashion; these disturbances are considered in both a structured (Gaussian) and unstructured ('worst case') sense. The performance of these algorithms is analyzed in terms of the eigenmodes of the resulting controlled systems, and the sensitivity of individual eigenmodes to both control and observation is quantified.

  5. Shortening Delivery Times of Intensity Modulated Proton Therapy by Reducing Proton Energy Layers During Treatment Plan Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl; Kooy, Hanne M.; Heijmen, Ben J.M.

    2015-06-01

    Purpose: To shorten delivery times of intensity modulated proton therapy by reducing the number of energy layers in the treatment plan. Methods and Materials: We have developed an energy layer reduction method, which was implemented into our in-house-developed multicriteria treatment planning system “Erasmus-iCycle.” The method consisted of 2 components: (1) minimizing the logarithm of the total spot weight per energy layer; and (2) iteratively excluding low-weighted energy layers. The method was benchmarked by comparing a robust “time-efficient plan” (with energy layer reduction) with a robust “standard clinical plan” (without energy layer reduction) for 5 oropharyngeal cases and 5 prostate cases. Both plans of each patient had equal robust plan quality, because the worst-case dose parameters of the standard clinical plan were used as dose constraints for the time-efficient plan. Worst-case robust optimization was performed, accounting for setup errors of 3 mm and range errors of 3% + 1 mm. We evaluated the number of energy layers and the expected delivery time per fraction, assuming 30 seconds per beam direction, 10 ms per spot, and 400 Giga-protons per minute. The energy switching time was varied from 0.1 to 5 seconds. Results: The number of energy layers was on average reduced by 45% (range, 30%-56%) for the oropharyngeal cases and by 28% (range, 25%-32%) for the prostate cases. When assuming 1, 2, or 5 seconds energy switching time, the average delivery time was shortened from 3.9 to 3.0 minutes (25%), 6.0 to 4.2 minutes (32%), or 12.3 to 7.7 minutes (38%) for the oropharyngeal cases, and from 3.4 to 2.9 minutes (16%), 5.2 to 4.2 minutes (20%), or 10.6 to 8.0 minutes (24%) for the prostate cases. Conclusions: Delivery times of intensity modulated proton therapy can be reduced substantially without compromising robust plan quality. Shorter delivery times are likely to reduce treatment uncertainties and costs.
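
    The iterative-exclusion component of the method can be sketched as a loop that drops layers carrying a small fraction of the total spot weight and then re-optimizes. The re-optimization below is a crude renormalization placeholder and the layer weights are random; in the paper this step runs inside Erasmus-iCycle together with the log-of-total-spot-weight penalty.

        import numpy as np

        rng = np.random.default_rng(3)
        layer_weights = {e: rng.exponential(1.0) for e in range(140, 180, 2)}   # MeV -> total spot weight

        threshold_fraction = 0.02          # exclude layers carrying < 2% of the total weight
        while True:
            total = sum(layer_weights.values())
            low = [e for e, w in layer_weights.items() if w < threshold_fraction * total]
            if not low:
                break
            for e in low:
                del layer_weights[e]
            # placeholder "re-optimization": redistribute weight proportionally over kept layers
            scale = total / sum(layer_weights.values())
            layer_weights = {e: w * scale for e, w in layer_weights.items()}

        print(f"{len(layer_weights)} energy layers kept:", sorted(layer_weights))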

  6. How should epistemic uncertainty in modelling water resources management problems shape evaluations of their operations?

    NASA Astrophysics Data System (ADS)

    Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.

    2017-12-01

    In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is, in part, due to a lack of faith in the fidelity of the optimization exercise with regards to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance from uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or aggregating regional storages. We create 'rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternate objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance in an effort to generalize the validity of the optimized performance expectations.

  7. Quantum communication complexity of establishing a shared reference frame.

    PubMed

    Rudolph, Terry; Grover, Lov

    2003-11-21

    We discuss the aligning of spatial reference frames from a quantum communication complexity perspective. This enables us to analyze multiple rounds of communication and give several simple examples demonstrating tradeoffs between the number of rounds and the type of communication. Using a distributed variant of a quantum computational algorithm, we give an explicit protocol for aligning spatial axes via the exchange of spin-1/2 particles which makes no use of either exchanged entangled states, or of joint measurements. This protocol achieves a worst-case fidelity for the problem of "direction finding" that is asymptotically equivalent to the optimal average case fidelity achievable via a single forward communication of entangled states.

  8. TH-CD-BRA-09: Towards Absolute Dose Measurement in MRI-Linac and Gamma-Knife: Design and Construction of An MR-Compatible Water Calorimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Entezari, N; Sarfehnia, A; Renaud, J

    Purpose: The purpose of this work is to design and optimize a portable Water Calorimeter (WC) for use in a commercial MRI-linac and Gamma Knife in addition to conventional radiotherapy linacs. Water calorimeters determine absorbed dose to water at a point by measuring the radiation-induced temperature rise of the volume (the two are related by the medium's specific heat capacity). In this formalism, one important correction factor is the heat transfer correction k_ht. It compensates for heat gain/loss due to conductive and convective effects, and is numerically calculated as the ratio of the temperature rise in the absence of heat loss to that in the presence of heat loss. Operating at 4°C ensures convection is minimal. Methods: Commercial finite element software was used to evaluate several WC designs with different insulation materials and thicknesses; channels allowing coolant to travel around the WC (to sustain the WC at 4°C) were modeled, and worst-case variation in the temperature of the coolant (2.6 mK/s) was simulated for optimization purposes. Additionally, several calorimeter vessel design parameters (front/back glass thickness/separation, diameter) were also simulated and optimized. Optimization is based on minimizing long-term calorimeter drift (24 h) as well as the variation and magnitude of k_ht. Results: The final selected WC design reached a modest drift of 11 µK/s after 15 h for the worst-case coolant temperature variation. This design consists of coolant channels encompassed on both sides by cryogel insulation. For the MRI-linac beam, glass thickness has the largest effect on k_ht, with variation of up to 0.6% in the first run for thicknesses ranging between 0.5-1.7 mm. Subsequent runs vary only within 0.1% with glass thickness. Other factors such as vessel radius and top/bottom glass separation have sub-0.1% effects on k_ht. Conclusion: An MR-safe 4°C stagnant WC appropriate for dosimetry in MRI-linac and Gamma Knife was designed and optimized, and its construction is nearly complete. NSERC Discovery Grant RGPIN-435608.
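
    The calorimetric relation underlying this design is the conversion of the measured temperature rise to dose, D_w = c_w * dT * k_ht. The numbers below are illustrative only, not measurements or simulation results from this calorimeter.

        # Absorbed dose to water from the radiation-induced temperature rise.
        c_w = 4206.0          # J/(kg K), specific heat capacity of water near 4 degC
        delta_T = 0.48e-3     # K, assumed measured temperature rise for one irradiation
        k_ht = 1.004          # assumed heat-transfer correction for the vessel geometry

        D_w = c_w * delta_T * k_ht
        print(f"absorbed dose to water: {D_w:.3f} Gy")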

  9. Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.

    PubMed

    Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David

    2014-01-01

    We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, so they should not be considered for large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the small effect that the selection of a particular metaheuristic and variations in its operational parameters have on this optimization problem.
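
    For readers unfamiliar with such tests, the sketch below shows one common nonparametric multicompare workflow (a Friedman test followed by Bonferroni-corrected pairwise Wilcoxon tests) on invented localization errors; it is an illustration, not the authors' exact procedure.

```python
# Illustrative nonparametric multicompare test on localization errors of several
# metaheuristics. The error data are made up for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errors = {  # localization error (mm) per Monte Carlo trial, per metaheuristic
    "SA":  rng.normal(6.0, 1.0, 30),
    "GA":  rng.normal(6.2, 1.0, 30),
    "PSO": rng.normal(8.5, 1.5, 30),
    "DE":  rng.normal(8.3, 1.5, 30),
}

# Friedman test: are the methods' error distributions distinguishable at all?
stat, p = stats.friedmanchisquare(*errors.values())
print(f"Friedman chi2={stat:.2f}, p={p:.3g}")

# Simple post-hoc pairwise comparisons (Bonferroni-corrected Wilcoxon tests)
names = list(errors)
n_pairs = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        _, pw = stats.wilcoxon(errors[names[i]], errors[names[j]])
        print(names[i], "vs", names[j], "corrected p =", min(1.0, pw * n_pairs))
```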

  10. Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects

    ERIC Educational Resources Information Center

    Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor

    2011-01-01

    Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…

  11. 30 CFR 553.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...

  12. 30 CFR 553.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...

  13. 30 CFR 553.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...

  14. Accounting for range uncertainties in the optimization of intensity modulated proton therapy.

    PubMed

    Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas

    2007-05-21

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function random variables as well. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations compared to conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches--although conceptually different--yield very similar results on a qualitative level.
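
    The two ideas can be illustrated on a toy influence-matrix problem; the sketch below is a simplified stand-in (random dose-influence matrices and a quadratic objective), not the authors' formulation.

```python
# Toy comparison of (i) expected-value optimization over range scenarios and
# (ii) worst-case optimization that penalizes, per voxel, the scenario dose
# farthest from the prescription. All data here are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_voxels, n_beamlets, n_scenarios = 50, 20, 5
# One dose-influence matrix per range scenario (nominal, overshoot, undershoot, ...)
D = rng.uniform(0.0, 1.0, size=(n_scenarios, n_voxels, n_beamlets))
prescription = np.full(n_voxels, 60.0)
probs = np.full(n_scenarios, 1.0 / n_scenarios)

def expected_objective(w):
    doses = D @ w                                  # (scenarios, voxels)
    return np.sum(probs[:, None] * (doses - prescription) ** 2)

def worst_case_objective(w):
    doses = D @ w
    dev = (doses - prescription) ** 2
    return np.sum(dev.max(axis=0))                 # worst scenario per voxel

w0 = np.full(n_beamlets, 1.0)
bounds = [(0, None)] * n_beamlets                  # beamlet weights are nonnegative
res = minimize(worst_case_objective, w0, bounds=bounds, method="L-BFGS-B")
print("worst-case objective:", res.fun, "expected objective:", expected_objective(res.x))
```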

  15. Data-Driven Zero-Sum Neuro-Optimal Control for a Class of Continuous-Time Unknown Nonlinear Systems With Disturbance Using ADP.

    PubMed

    Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei

    2016-02-01

    This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.

  16. Performance of Optimized Actuator and Sensor Arrays in an Active Noise Control System

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.; Lyle, K. H.; Cline, J. H.; Cabell, R. H.

    1996-01-01

    Experiments have been conducted in NASA Langley's Acoustics and Dynamics Laboratory to determine the effectiveness of optimized actuator/sensor architectures and controller algorithms for active control of harmonic interior noise. Tests were conducted in a large scale fuselage model - a composite cylinder which simulates a commuter class aircraft fuselage with three sections of trim panel and a floor. Using an optimization technique based on the component transfer functions, combinations of 4 out of 8 piezoceramic actuators and 8 out of 462 microphone locations were evaluated against predicted performance. A combinatorial optimization technique called tabu search was employed to select the optimum transducer arrays. Three test frequencies represent the cases of a strong acoustic and strong structural response, a weak acoustic and strong structural response and a strong acoustic and weak structural response. Noise reduction was obtained using a Time Averaged/Gradient Descent (TAGD) controller. Results indicate that the optimization technique successfully predicted best and worst case performance. An enhancement of the TAGD control algorithm was also evaluated. The principal components of the actuator/sensor transfer functions were used in the PC-TAGD controller. The principal components are shown to be independent of each other while providing control as effective as the standard TAGD.
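
    The transducer-selection idea can be sketched with a generic tabu search over 4-of-8 actuator subsets; the cost function below is only a placeholder for the transfer-function-based performance prediction used in the experiments.

```python
# Illustrative tabu-search sketch: choose 4 of 8 actuators to minimize a
# predicted residual-noise cost. The cost model is a stand-in, not the paper's.
import random

ACTUATORS = list(range(8))

def predicted_cost(subset):
    # Placeholder for the transfer-function-based performance prediction.
    return sum((a - 3.5) ** 2 for a in subset) + random.random() * 1e-3

def tabu_search(k=4, iters=50, tabu_len=5):
    current = tuple(sorted(random.sample(ACTUATORS, k)))
    best, best_cost = current, predicted_cost(current)
    tabu = [current]
    for _ in range(iters):
        # Neighbourhood: swap one selected actuator for one unselected actuator.
        neighbours = [
            tuple(sorted((set(current) - {out}) | {inn}))
            for out in current for inn in ACTUATORS if inn not in current
        ]
        candidates = [n for n in neighbours if n not in tabu] or neighbours
        current = min(candidates, key=predicted_cost)
        tabu = (tabu + [current])[-tabu_len:]
        cost = predicted_cost(current)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost

print(tabu_search())
```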

  17. Parallel transmission pulse design with explicit control for the specific absorption rate in the presence of radiofrequency errors.

    PubMed

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien

    2016-06-01

    A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
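
    The iterative scheme lends itself to a compact sketch; the two stand-in functions below are placeholders for the actual SAR-constrained pulse design and worst-case SAR evaluation, and the proportional tightening rule is an assumption.

```python
# Hedged sketch of the iterate-until-safe loop: design under a SAR constraint,
# evaluate the worst-case SAR under RF errors, tighten the constraint, repeat.
def design_pulse(sar_limit):
    # Stand-in for the SAR-constrained pTx pulse design.
    return {"design_sar_limit": sar_limit}

def worst_case_sar(pulse, amp_err=0.08):
    # Stand-in: amplitude errors can raise SAR roughly with the square of (1 + err).
    return pulse["design_sar_limit"] * (1.0 + amp_err) ** 2

def design_with_worst_case_sar(safety_limit, max_iter=10):
    sar_limit = safety_limit
    for _ in range(max_iter):
        pulse = design_pulse(sar_limit)
        wc = worst_case_sar(pulse)
        if wc <= safety_limit:
            return pulse
        # Tighten the design-time constraint in proportion to the overshoot.
        sar_limit *= safety_limit / wc
    raise RuntimeError("worst-case SAR did not converge below the safety limit")

print(design_with_worst_case_sar(safety_limit=10.0))
```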

  18. Identification of Swallowing Tasks From a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment

    PubMed Central

    Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-01-01

    Purpose The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Results Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. Conclusions The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS. PMID:28614846

  19. Identification of Swallowing Tasks From a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment.

    PubMed

    Hazelwood, R Jordan; Armeson, Kent E; Hill, Elizabeth G; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-07-12

    The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS.

  20. 41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What is meant by “reasonable worst case fire scenario”? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...

  1. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  2. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  3. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  4. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  5. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  6. 41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is meant by “reasonable worst case fire scenario”? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...

  7. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  8. Adaptive Power Control for Space Communications

    NASA Technical Reports Server (NTRS)

    Thompson, Willie L., II; Israel, David J.

    2008-01-01

    This paper investigates the implementation of power control techniques for crosslink communications during a rendezvous scenario of the Crew Exploration Vehicle (CEV) and the Lunar Surface Access Module (LSAM). During the rendezvous, NASA requires that the CEV support two communication links simultaneously: space-to-ground and crosslink. The crosslink will generate excess interference to the space-to-ground link as the distance between the two vehicles decreases if the output power is fixed and optimized for the worst-case link analysis at the maximum distance range. As a result, power control is required to maintain the optimal power level for the crosslink without interfering with the space-to-ground link. A proof-of-concept will be described and implemented with the Goddard Space Flight Center (GSFC) Communications, Standard, and Technology Lab (CSTL).
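
    The basic idea, holding received power constant by scaling transmit power with the free-space path loss as the separation changes, can be sketched as below; the frequency, antenna gains, and target receive level are illustrative assumptions, not mission values.

```python
# Illustrative adaptive power control: size the transmit power from the
# free-space path loss so the received crosslink power stays at a target level.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def required_tx_power_dbm(distance_m, freq_hz, target_rx_dbm=-100.0,
                          tx_gain_db=3.0, rx_gain_db=3.0):
    """Transmit power needed to hold the received power at the target level."""
    return target_rx_dbm + fspl_db(distance_m, freq_hz) - tx_gain_db - rx_gain_db

# Fixed worst-case sizing at 100 km versus adaptive power at 1 km separation:
for d in (100e3, 1e3):
    print(f"{d/1e3:6.1f} km -> {required_tx_power_dbm(d, 2.2e9):6.1f} dBm")
```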

  9. Concept Development and Analysis of the Environmental Control, Chemical Protection, and Power Generation Systems for the Battalion Aid Station and Division Clearing Station

    DTIC Science & Technology

    1986-03-31

    requirements necessary to optimize BAS/DCS operation in worst case environments. 4) Identify the qualitative and quantitative values of equipment which... [fragment of an equipment weight/power table: defibrillator, surgical sink unit, resuscitator-inhaler, surgical sterilizer] ...transferred, the driving force for transfer is the difference in dry bulb temperatures. During heat transfer between unsaturated air and a wetted

  10. Optimized PID control of depth of hypnosis in anesthesia.

    PubMed

    Padula, Fabrizio; Ionescu, Clara; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio; Vivacqua, Giulio

    2017-06-01

    This paper addresses the use of proportional-integral-derivative controllers for regulating the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as a controlled variable. In fact, introducing an automatic control system might provide significant benefits for the patient in reducing the risk of under- and over-dosing. In this study, the controller parameters are obtained through genetic algorithms by solving a min-max optimization problem. A set of 12 patient models representative of a large population variance is used to test controller robustness. The worst-case performance in the considered population is minimized considering two different scenarios: the induction case and the maintenance case. Our results indicate that including a gain scheduling strategy enables optimal performance for the induction and maintenance phases, separately. Using a single tuning to address both tasks may result in a loss of performance of up to 102% in the induction phase and up to 31% in the maintenance phase. Furthermore, it is shown that a suitably designed low-pass filter on the controller output can handle the trade-off between the performance and the noise effect in the control variable. Optimally tuned PID controllers provide a fast induction time with an acceptable overshoot and a satisfactory disturbance rejection performance during maintenance. These features make them a very good tool for comparison when other control algorithms are developed. Copyright © 2017 Elsevier B.V. All rights reserved.
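
    A stripped-down analogue of the min-max tuning problem is sketched below: one PID gain set is tuned so that the worst tracking cost over a small set of first-order "patient" models is minimized. The plant models and the use of differential evolution in place of the genetic algorithm are simplifying assumptions.

```python
# Min-max PID tuning sketch: minimize the worst-case integral squared error
# over a small family of first-order patient models. Everything is illustrative.
import numpy as np
from scipy.optimize import differential_evolution

DT, T_END = 0.5, 300.0                       # s
PATIENTS = [(k, tau) for k in (0.8, 1.0, 1.3) for tau in (30.0, 60.0)]

def step_cost(gains, k, tau):
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for _ in range(int(T_END / DT)):
        err = 1.0 - y                         # unit setpoint
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += DT * (-y + k * u) / tau          # first-order patient response
        cost += err ** 2 * DT                 # integral squared error
    return cost

def worst_case(gains):
    return max(step_cost(gains, k, tau) for k, tau in PATIENTS)

res = differential_evolution(worst_case, bounds=[(0, 5), (0, 0.5), (0, 20)],
                             seed=1, maxiter=30, tol=1e-3)
print("gains:", res.x, "worst-case ISE:", res.fun)
```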

  11. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem in a scenario-based approach is proposed, in which demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We treat the decision variables, respectively, with a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.

  12. Mitigating energy loss on distribution lines through the allocation of reactors

    NASA Astrophysics Data System (ADS)

    Miranda, T. M.; Romero, F.; Meffe, A.; Castilho Neto, J.; Abe, L. F. T.; Corradi, F. E.

    2018-03-01

    This paper presents a methodology for automatic reactor allocation on medium voltage distribution lines to reduce energy loss. In Brazil, some feeders are distinguished by their long lengths and very low load, which results in a high influence of the capacitance of the line on the circuit's performance, requiring compensation through the installation of reactors. The automatic allocation is accomplished using an optimization meta-heuristic called the Global Neighbourhood Algorithm. Given a set of reactor models and a circuit, it outputs an optimal solution in terms of reduction of energy loss. The algorithm is also able to verify that the voltage limits determined by the user are not violated, in addition to checking power quality. The methodology was implemented in a software tool, which can also show the allocation graphically. A simulation with four real feeders is presented in the paper. The obtained results reduced the energy loss significantly, by 50.56% in the worst case and by up to 93.10% in the best case.

  13. Assessing oral bioaccessibility of trace elements in soils under worst-case scenarios by automated in-line dynamic extraction as a front end to inductively coupled plasma atomic emission spectrometry.

    PubMed

    Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel

    2014-09-09

    A novel biomimetic extraction procedure that allows for the in-line handling of ≥400 mg solid substrates is herein proposed for automatic ascertainment of trace element (TE) bioaccessibility in soils under worst-case conditions as per recommendations of ISO norms. A unified bioaccessibility/BARGE method (UBM)-like physiological-based extraction test is evaluated for the first time in a dynamic format for accurate assessment of in-vitro bioaccessibility of Cr, Cu, Ni, Pb and Zn in forest and residential-garden soils by on-line coupling of a hybrid flow set-up to inductively coupled plasma atomic emission spectrometry. Three biologically relevant operational extraction modes were thoroughly investigated, mimicking (i) gastric juice extraction alone, (ii) a saliva and gastric juice composite in a unidirectional flow extraction format, and (iii) a saliva and gastric juice composite in a recirculation mode. The extraction profiles of the three configurations using digestive fluids were proven to fit a first order reaction kinetic model for estimating the maximum TE bioaccessibility, that is, the actual worst-case scenario in human risk assessment protocols. A full factorial design was employed, in which the sample amount (400-800 mg), the extractant flow rate (0.5-1.5 mL min(-1)) and the extraction temperature (27-37°C) were selected as variables for the multivariate optimization studies in order to obtain the maximum TE extractability. Two soils of varied physicochemical properties were analysed and no significant differences were found at the 0.05 significance level between the summation of leached concentrations of TE in gastric juice plus the residual fraction and the total concentration of the overall assayed metals determined by microwave digestion. These results showed the reliability and lack of bias (trueness) of the automatic biomimetic extraction approach using digestive juices. Copyright © 2014 Elsevier B.V. All rights reserved.
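
    The kinetic treatment mentioned above amounts to fitting C(t) = C_max(1 - e^(-kt)) to the cumulative extraction profile, with C_max estimating the maximum (worst-case) bioaccessible concentration; the sketch below uses invented data points.

```python
# Fitting a first-order extraction profile to cumulative leached-metal data.
# The data points are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_max, k):
    return c_max * (1.0 - np.exp(-k * t))

t_min = np.array([2, 5, 10, 20, 30, 45, 60], dtype=float)
leached_mg_kg = np.array([3.1, 6.8, 11.2, 15.9, 18.0, 19.2, 19.6])

(c_max, k), _ = curve_fit(first_order, t_min, leached_mg_kg, p0=(20.0, 0.05))
print(f"maximum bioaccessible concentration ~ {c_max:.1f} mg/kg, k = {k:.3f} 1/min")
```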

  14. Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation

    NASA Astrophysics Data System (ADS)

    Choi, J.; Raguin, L. G.

    2010-10-01

    Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.

  15. Less than severe worst case accidents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, G.A.

    1996-08-01

    Many systems can provide tremendous benefit if operating correctly, produce only an inconvenience if they fail to operate, but have extreme consequences if they are only partially disabled such that they operate erratically or prematurely. In order to assure safety, systems are often tested against the most severe environments and accidents that are considered possible to ensure either safe operation or safe failure. However, it is often the less severe environments which result in the "worst case accident" since these are the conditions in which part of the system may be exposed or rendered unpredictable prior to total system failure. Some examples of less severe mechanical, thermal, and electrical environments which may actually be worst case are described as cautions for others in industries with high consequence operations or products.

  16. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified, and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction in computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
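
    A toy analogue of this constrained formulation is sketched below: the worst (largest) single-harmonic resonance amplitude of a Duffing oscillator is found with scipy's SQP implementation (SLSQP), with the harmonic balance residual imposed as a nonlinear equality constraint. The oscillator and its parameters are illustrative, not the paper's reduced-space method.

```python
# Worst resonance response of a Duffing oscillator, single-harmonic balance,
# solved as an equality-constrained maximization of the amplitude.
import numpy as np
from scipy.optimize import minimize

m, c, k, gamma, F = 1.0, 0.05, 1.0, 0.5, 0.2

def hb_residual(x):
    a, w = x
    real = (k - m * w**2) * a + 0.75 * gamma * a**3
    imag = c * w * a
    return real**2 + imag**2 - F**2           # single-harmonic balance equation

res = minimize(lambda x: -x[0],                # maximize the amplitude a
               x0=np.array([0.5, 1.0]),
               method="SLSQP",
               bounds=[(1e-6, 10.0), (0.1, 3.0)],
               constraints=[{"type": "eq", "fun": hb_residual}])

a_worst, w_worst = res.x
print(f"worst-case amplitude {a_worst:.3f} at frequency {w_worst:.3f} rad/s")
```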

  17. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, L; Department of Industrial Engineering, University of Houston, Houston, TX; Yu, J

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with the conventional non-robust plans. Resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and ΔD95%, ΔD98%, ΔD5%, ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to the IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases, while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.

  18. Self-compensating design for reduction of timing and leakage sensitivity to systematic pattern dependent variation

    NASA Astrophysics Data System (ADS)

    Gupta, Puneet; Kahng, Andrew B.; Kim, Youngmin; Sylvester, Dennis

    2006-03-01

    Focus is one of the major sources of linewidth variation. CD variation caused by defocus is largely systematic after the layout is finished. In particular, dense lines "smile" through focus while isolated lines "frown" in typical Bossung plots. This well-defined systematic behavior of focus-dependent CD variation allows us to develop a self-compensating design methodology. In this work, we propose a novel design methodology that allows explicit compensation of focus-dependent CD variation, either within a cell (self-compensated cells) or across cells in a critical path (self-compensated design). By creating iso and dense variants for each library cell, we can achieve designs that are more robust to focus variation. Optimization with a mixture of iso and dense cell variants is possible both for area and leakage power, with the latter providing an interesting complement to existing leakage reduction techniques such as dual-Vth. We implement both heuristic and Mixed-Integer Linear Programming (MILP) solution methods to address this optimization, and experimentally compare their results. Our results indicate that designing with a self-compensated cell library incurs ~12% area penalty and ~6% leakage increase over original layouts while compensating for focus-dependent CD variation (i.e., the design meets timing constraints across a large range of focus variation). We observe ~27% area penalty and ~7% leakage increase at the worst-case defocus condition using only single-pitch cells. The area penalty of circuits after using either the heuristic or MILP optimization approach is reduced to ~3% while maintaining timing. We also apply our optimizations to leakage, which traditionally shows very large variability due to its exponential relationship with gate CD. We conclude that a mixed iso/dense library combined with a sensitivity-based optimization approach yields much better area/timing/leakage tradeoffs than using a self-compensated cell library alone. Self-compensated design shows an average of 25% leakage reduction at the worst defocus condition for the benchmark designs that we have studied.

  19. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    DTIC Science & Technology

    2017-05-08

    hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. The key challenges for waveguide photonics... improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and... photonic crossbar architectures (Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7

  20. Feasibility and robustness of dose painting by numbers in proton therapy with contour-driven plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barragán, A. M., E-mail: ana.barragan@uclouvain.be; Differding, S.; Lee, J. A.

    Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, a voxel-by-voxel prescription to the target volume (GTV-PET) was calculated from 18FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of a range shifter were separately tested on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separations [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spot grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robustness for coverage of the GTV-PET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage, while robust-optimized plans improved it considerably (the difference between the worst-case CTV dose and the clinical constraint was up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans). Also, only 15% of the points in the GTV-PET (worst case) were above 5% of the DPBN prescription for robust-optimized plans, while more than 50% were for PTV plans. Low dose to organs at risk (OARs) could be achieved for both PTV and robust-optimized plans. Conclusions: DPBN in proton therapy is feasible with the use of a sufficient number of subcontours and automatically generated scanning patterns, and no more than three beams are needed. Robust optimization ensured the required target coverage and minimal overdosing, while the PTV approach led to nonrobust plans with excessive overdose. Low dose to OARs can be achieved even in the presence of a high dose escalation as in DPBN.

  1. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order of space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.

  2. Method of Generating Transient Equivalent Sink and Test Target Temperatures for Swift BAT

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    2004-01-01

    The NASA Swift mission has a 600-km altitude and a 22-degree maximum inclination. The sun angle varies from 45 degrees to 180 degrees in normal operation. As a result, environmental heat fluxes absorbed by the Burst Alert Telescope (BAT) radiator and loop heat pipe (LHP) compensation chambers (CCs) vary transiently. Therefore, the equivalent sink temperatures for the radiator and CCs vary transiently. In thermal performance verification testing in vacuum, the radiator and CCs radiated heat to sink targets. This paper presents an analytical technique for generating orbit transient equivalent sink temperatures and a technique for generating transient sink target temperatures for the radiator and LHP CCs. Using these techniques, transient target temperatures for the radiator and LHP CCs were generated for three thermal environmental cases: worst hot case, worst cold case, and cooldown and warmup between the worst hot case in sunlight and the worst cold case in eclipse, and three different heat transport values: 128 W, 255 W, and 382 W. The 128 W case assumed that the two LHPs transport 255 W equally to the radiator. The 255 W case assumed that one LHP fails so that the remaining LHP transports all the waste heat from the detector array to the radiator. The 382 W case assumed that one LHP fails so that the remaining LHP transports all the waste heat from the detector array to the radiator, and has a 50% design margin. All these transient target temperatures were successfully implemented in the engineering test unit (ETU) LHP and flight LHP thermal performance verification tests in vacuum.

  3. Defense Small Business Innovation Research Program (SBIR), Volume 4, Defense Agencies Abstracts of Phase 1 Awards 1991

    DTIC Science & Technology

    1991-01-01

    EXPERIENCE IN DEVELOPING INTEGRATED OPTICAL DEVICES, NONLINEAR MAGNETIC-OPTIC MATERIALS, HIGH FREQUENCY MODULATORS, COMPUTER-AIDED MODELING AND SOPHISTICATED... HIGH-LEVEL PRESENTATION AND DISTRIBUTED CONTROL MODELS FOR INTEGRATING HETEROGENEOUS MECHANICAL ENGINEERING APPLICATIONS AND TOOLS. THE DESIGN IS FOCUSED... STATISTICALLY ACCURATE WORST CASE DEVICE MODELS FOR CIRCUIT SIMULATION. PRESENT METHODS OF WORST CASE DEVICE DESIGN ARE AD HOC AND DO NOT ALLOW THE

  4. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
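
    A simplified illustration of bounding a failure probability over a p-box is given below; it searches the interval-valued mean and standard deviation of a normal input directly rather than using the Bernstein-expansion machinery, and the requirement function is invented.

```python
# Best-case and worst-case failure probability of g(x) = 3 - x**2 <= 0 when x is
# normal with interval-valued (p-box) mean and standard deviation.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

MU_BOX, SIGMA_BOX = (0.8, 1.2), (0.15, 0.30)

def failure_probability(params):
    mu, sigma = params
    thr = np.sqrt(3.0)                 # failure when |x| >= sqrt(3)
    return stats.norm.sf(thr, mu, sigma) + stats.norm.cdf(-thr, mu, sigma)

bounds = [MU_BOX, SIGMA_BOX]
best = minimize(failure_probability, x0=[1.0, 0.2], bounds=bounds)
worst = minimize(lambda p: -failure_probability(p), x0=[1.0, 0.2], bounds=bounds)
print(f"failure probability range: [{best.fun:.2e}, {-worst.fun:.2e}]")
```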

  5. Local measles vaccination gaps in Germany and the role of vaccination providers.

    PubMed

    Eichner, Linda; Wjst, Stephanie; Brockmann, Stefan O; Wolfers, Kerstin; Eichner, Martin

    2017-08-14

    Measles elimination in Europe is an urgent public health goal, yet despite the efforts of its member states, vaccination gaps and outbreaks occur. This study explores local vaccination heterogeneity in kindergartens and municipalities of a German county. Data on children from mandatory school enrolment examinations in 2014/15 in Reutlingen county were used. Children with unknown vaccination status were either removed from the analysis (best case) or assumed to be unvaccinated (worst case). Vaccination data were translated into expected outbreak probabilities. Physicians and kindergartens with statistically outstanding numbers of under-vaccinated children were identified. A total of 170 (7.1%) of 2388 children did not provide a vaccination certificate; 88.3% (worst case) or 95.1% (best case) were vaccinated at least once against measles. Based on the worst case vaccination coverage, <10% of municipalities and <20% of kindergartens were sufficiently vaccinated to be protected against outbreaks. Excluding children without a vaccination certificate (best case) leads to over-optimistic views: the overall outbreak probability in case of a measles introduction lies between 39.5% (best case) and 73.0% (worst case). Four paediatricians were identified who accounted for 41 of 109 unvaccinated children and for 47 of 138 incomplete vaccinations; GPs showed significantly higher rates of missing vaccination certificates and unvaccinated or under-vaccinated children than paediatricians. Missing vaccination certificates pose a severe problem regarding the interpretability of vaccination data. Although the coverage for at least one measles vaccination is higher in the studied county than in most South German counties and higher than the European average, many severe and potentially dangerous vaccination gaps occur locally. If other federal German states and EU countries show similar vaccination variability, measles elimination may not succeed in Europe.

  6. Worst case estimation of homology design by convex analysis

    NASA Technical Reports Server (NTRS)

    Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.

    1998-01-01

    The methodology of homology design is investigated for optimum design of advanced structures, for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on the finite element sensitivity analysis, necessarily requires the specification of external loadings. The formulation to evaluate the worst case for homology design caused by uncertain fluctuation of loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
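
    Under the simplifying assumption that the distortion index depends linearly on the nodal-force fluctuation, the worst case on the hyperellipse has the closed-form Lagrange-multiplier solution sketched below; the vectors and weighting matrix are illustrative.

```python
# Worst case of a linear index c.delta over the hyperellipse delta' W delta <= 1:
# delta* = W^{-1} c / sqrt(c' W^{-1} c), with worst-case value sqrt(c' W^{-1} c).
import numpy as np

def worst_case_on_ellipse(c, W):
    """Maximize c @ delta subject to delta @ W @ delta <= 1."""
    Winv_c = np.linalg.solve(W, c)
    value = np.sqrt(c @ Winv_c)          # worst-case error index
    delta = Winv_c / value               # worst-case load fluctuation
    return value, delta

# Toy 3-DOF example with an anisotropic uncertainty ellipse
c = np.array([1.0, -0.5, 2.0])
W = np.diag([4.0, 1.0, 9.0])
value, delta = worst_case_on_ellipse(c, W)
print("worst-case index:", value, "at delta =", delta)
```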

  7. Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios

    NASA Astrophysics Data System (ADS)

    Kluess, D.; Mittelmeier, W.; Bader, R.

    2009-12-01

    In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensely. Apart from standard testing, we calculated the implant performance under different worst case scenarios including malposition, bone defects and stumbling. A finite-element-model was developed to calculate the implant performance in situ. The worst case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stress amounts were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.

  8. Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios

    NASA Astrophysics Data System (ADS)

    Kluess, D.; Mittelmeier, W.; Bader, R.

    2010-03-01

    In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensely. Apart from standard testing, we calculated the implant performance under different worst case scenarios including malposition, bone defects and stumbling. A finite-element-model was developed to calculate the implant performance in situ. The worst case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stress amounts were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.

  9. Migration of mineral oil from party plates of recycled paperboard into foods: 1. Is recycled paperboard fit for the purpose? 2. Adequate testing procedure.

    PubMed

    Dima, Giovanna; Verzera, Antonella; Grob, Koni

    2011-11-01

    Party plates made of recycled paperboard with a polyolefin film on the food contact surface (more often polypropylene than polyethylene) were tested for migration of mineral oil into various foods applying reasonable worst case conditions. The worst case was identified as a slice of fried meat placed onto the plate while hot and allowed to cool for 1 h. As it caused the acceptable daily intake (ADI) specified by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) to be exceeded, it is concluded that recycled paperboard is generally acceptable for party plates only when separated from the food by a functional barrier. Migration data obtained with oil as simulant at 70°C was compared to the migration into foods. A contact time of 30 min was found to reasonably cover the worst case determined in food.

  10. Sensitivity of worst-case storm surge considering the influence of climate change

    NASA Astrophysics Data System (ADS)

    Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya

    2016-04-01

    There are two standpoints when assessing risk caused by climate change. One is disaster prevention: for this purpose, we need probabilistic information on meteorological elements, obtained from a sufficiently large number of ensemble simulations. The other is disaster mitigation: for this purpose, we have to use a very high resolution, sophisticated model to represent a worst case event in detail. If we could use enough computer resources to drive many ensemble runs with a very high resolution model, we could handle all of these themes at once. However, resources are unfortunately limited in most cases, and we have to choose between resolution and the number of simulations when we design the experiment. Applying the PGWD (Pseudo Global Warming Downscaling) method is one solution for analyzing a worst case event in detail. Here we introduce an example of finding the influence of climate change on the worst case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1 km grid WRF model could represent both the intensity and the structure of a super typhoon. By adopting the PGWD method, we can only estimate the influence of climate change on the development process of the typhoon; changes in genesis could not be estimated. Finally, we drove the SU-WAT model (which includes a shallow water equation model) to obtain the signal of storm surge height. The result indicates that the height of the storm surge increased by up to 20% owing to these 150 years of climate change.

  11. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.

  12. Electrical Evaluation of RCA MWS5501D Random Access Memory, Volume 2, Appendix a

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.

  13. Factor analysis in optimization of formulation of high content uniformity tablets containing low dose active substance.

    PubMed

    Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David

    2017-11-15

    Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked by using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and the process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The obtained results prove the suitability of factor analysis to optimize the composition with respect to batches manufactured previously, and thus the use of meta-analysis under industrial conditions is feasible. Copyright © 2017 Elsevier B.V. All rights reserved.
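
    The process-capability check mentioned above can be sketched in a few lines; the specification limits and content values below are invented, not the study's data.

```python
# Process capability index Cpk for content data against specification limits.
import numpy as np

def cpk(values, lsl, usl):
    mu, sigma = np.mean(values), np.std(values, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# e.g. warfarin content as percent of label claim, spec limits 85-115%
content = np.array([98.2, 101.5, 99.8, 100.7, 97.9, 102.3, 100.1, 99.4])
print(f"Cpk = {cpk(content, 85.0, 115.0):.2f}")
```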

  14. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst case spectrum as a function of confidence level. The spectral representation that best fits these worst case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.

  15. A Multidimensional Assessment of Children in Conflictual Contexts: The Case of Kenya

    ERIC Educational Resources Information Center

    Okech, Jane E. Atieno

    2012-01-01

    Children in Kenya's Kisumu District Primary Schools (N = 430) completed three measures of trauma. Respondents completed the "My Worst Experience Scale" (MWES; Hyman and Snook 2002) and its supplement, the "School Alienation and Trauma Survey" (SATS; Hyman and Snook 2002), sharing their worst experiences overall and specifically…

  16. Worst-error analysis of batch filter and sequential filter in navigation problems. [in spacecraft trajectory estimation

    NASA Technical Reports Server (NTRS)

    Nishimura, T.

    1975-01-01

    This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes result in very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study presents useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.

  17. Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies

    NASA Astrophysics Data System (ADS)

    Vishnoi, U.; Noll, T. G.

    2012-09-01

    The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well-known, versatile approach that is widely applied in today's SoCs, especially but not exclusively for digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive for use as hardware accelerators for Application Specific Instruction Processors (ASIPs), thereby overcoming the well-known energy vs. flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce the hardware complexity is an important research topic at present. In such receivers, CORDIC accelerators can be used for digital baseband processing (fixed-point) and in Position-Velocity-Time estimation (floating-point). A micro architecture well suited to such applications is presented. This architecture is parameterized according to the wordlengths as well as the number of iterations and can be easily extended for floating-point data format. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, whereby the degree of pipelining is organized with one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology. For this purpose, carry-select adders and logarithmic shifters were chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology, and occupies a silicon area of only 1560 μm². The maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical, and 463 MHz under worst case, technology and application corner conditions, respectively. Simulated dynamic power dissipation is 0.24 µW/MHz at 0.9 V; static power is 38 µW in the slow corner, 65 µW in the typical corner and 518 µW in the fast corner, respectively. The latter can be reduced by 43% in a 40-nm CMOS technology using 0.5 V reverse back-bias. These features are compared with the results from different design styles as well as with an implementation in 28-nm CMOS technology. It is interesting that in the latter case area scales as expected, but worst case performance and energy do not scale well anymore.

  18. Design and Analysis of the Warm-To Suspension Links for Jefferson Lab's 11 Gev/c Super High Momentum Spectrometer

    NASA Astrophysics Data System (ADS)

    Sun, E.; Brindza, P.; Lassiter, S.; Fowler, M.

    2010-04-01

    This paper describes the design and analysis performed for the warm-to-cold suspension links of the warm-iron-yoke superconducting quadrupole magnets and the superconducting dipole magnet. The results of an investigation of titanium Ti-6Al-4V and Nitronic 50 stainless steel for suspension links that support the cold mass against preloads, forces due to cryogenic temperature, and imbalanced magnetic forces from misalignments are presented. Allowable stresses for normal-case and worst-case scenarios, space constraints, and heat leak considerations are discussed. Principles of the ASME Pressure Vessel Code were used to determine allowable stresses. Optimal angles of the suspension links were obtained by calculation and finite element methods. The stress levels of the suspension links for multiple scenarios are presented, discussed, and compared with the allowable stresses.

  19. Development of a Test Facility for Air Revitalization Technology Evaluation

    NASA Technical Reports Server (NTRS)

    Lu, Sao-Dung; Lin, Amy; Campbell, Melissa; Smith, Frederick

    2006-01-01

  20. Gain-Scheduled Fault Tolerance Control Under False Identification

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine (Technical Monitor)

    2006-01-01

    An active fault tolerant control (FTC) law is generally sensitive to false identification since the control gain is reconfigured for fault occurrence. In the conventional FTC law design procedure, dynamic variations due to false identification are not considered. In this paper, an FTC synthesis method is developed in order to consider possible variations of closed-loop dynamics under false identification into the control design procedure. An active FTC synthesis problem is formulated into an LMI optimization problem to minimize the upper bound of the induced-L2 norm which can represent the worst-case performance degradation due to false identification. The developed synthesis method is applied for control of the longitudinal motions of FASER (Free-flying Airplane for Subscale Experimental Research). The designed FTC law of the airplane is simulated for pitch angle command tracking under a false identification case.
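    As a hedged sketch only (the notation below is assumed here, not taken from the record), the performance level that such a synthesis bounds can be written as the induced L2 norm of the closed loop, maximized over the possible false-identification outcomes:

      \gamma^{*} \;=\; \min_{K}\; \max_{\theta \in \Theta_{\mathrm{false\ ID}}}\; \sup_{\|w\|_{2}\neq 0} \frac{\|z_{\theta}(K)\|_{2}}{\|w\|_{2}},

    where w denotes the exogenous input, z_theta the performance output of the closed loop under identification outcome theta, and the minimization over the controller K is carried out through LMI constraints that bound gamma from above.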

  1. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765

  2. Experimental measurement of preferences in health and healthcare using best-worst scaling: an overview.

    PubMed

    Mühlbacher, Axel C; Kaczynski, Anika; Zweifel, Peter; Johnson, F Reed

    2016-12-01

    Best-worst scaling (BWS), also known as maximum-difference scaling, is a multiattribute approach to measuring preferences. BWS aims at the analysis of preferences regarding a set of attributes, their levels or alternatives. It is a stated-preference method based on the assumption that respondents are capable of making judgments regarding the best and the worst (or the most and least important, respectively) out of three or more elements of a choice set. As is true of discrete choice experiments (DCE) generally, BWS avoids the known weaknesses of rating and ranking scales while holding the promise of generating additional information by making respondents choose twice, namely the best as well as the worst criteria. A systematic literature review found 53 BWS applications in health and healthcare. This article expounds the possibilities of application, the underlying theoretical concepts and the implementation of BWS in its three variants: 'object case', 'profile case' and 'multiprofile case'. The paper surveys BWS methods and revolves around study design, experimental design, and data analysis. Moreover, the article discusses the strengths and weaknesses of the three types of BWS and offers an outlook. A companion paper focuses on special issues of theory and statistical inference confronting BWS in preference measurement.

  3. Topology optimization of two-dimensional elastic wave barriers

    NASA Astrophysics Data System (ADS)

    Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G.

    2016-08-01

    Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued. The minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst case approach is followed.
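    Hedged sketch of the worst-case formulation implied above, with notation assumed here rather than taken from the paper: the material distribution rho is chosen to maximize the minimum insertion loss IL over the frequency band of interest,

      \max_{\rho}\; \min_{f \in [f_{\min},\, f_{\max}]} \mathrm{IL}(\rho, f), \qquad 0 \le \rho_e \le 1 \;\; \forall e,

    whereas the broadband variant replaces the inner minimum by the frequency-averaged insertion loss \frac{1}{f_{\max}-f_{\min}}\int_{f_{\min}}^{f_{\max}} \mathrm{IL}(\rho, f)\, df.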

  4. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. The approach also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
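    A minimal sketch of how a priority-based chromosome can be decoded into a path (a toy graph and a simple decoding rule are assumed here; the paper's flow models, genetic operators and Adaptive Weight Approach are not reproduced):

      # Decode a priority-based chromosome into a source-sink path (toy example).
      # Assumption: one priority value per node; at each step we move to the
      # unvisited neighbour with the highest priority.  Illustrative only.
      def decode_path(priorities, adjacency, source, sink):
          path, node, visited = [source], source, {source}
          while node != sink:
              candidates = [n for n in adjacency[node] if n not in visited]
              if not candidates:      # dead end: chromosome decodes to no path
                  return None
              node = max(candidates, key=lambda n: priorities[n])
              path.append(node)
              visited.add(node)
          return path

      adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
      chromosome = [4, 1, 3, 2]       # priority of nodes 0..3
      print(decode_path(chromosome, adjacency, source=0, sink=3))  # e.g. [0, 2, 3]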

  5. Effect of Impact Location on the Response of Shuttle Wing Leading Edge Panel 9

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Spellman, Regina L.; Hardy, Robin C.; Fasanella, Edwin L.; Jackson, Karen E.

    2005-01-01

    The objective of this paper is to compare the results of several simulations performed to determine the worst-case location for a foam impact on the Space Shuttle wing leading edge. The simulations were performed using the commercial non-linear transient dynamic finite element code, LS-DYNA. These simulations represent the first in a series of parametric studies performed to support the selection of the worst-case impact scenario. Panel 9 was selected for this study to enable comparisons with previous simulations performed during the Columbia Accident Investigation. The projectile for this study is a 5.5-in cube of typical external tank foam weighing 0.23 lb. Seven locations spanning the panel surface were impacted with the foam cube. For each of these cases, the foam was traveling at 1000 ft/s directly aft, along the orbiter X-axis. Results compared from the parametric studies included strains, contact forces, and material energies for various simulations. The results show that the worst case impact location was on the top surface, near the apex.

  6. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses

    PubMed Central

    Faith, Daniel P.

    2015-01-01

    The phylogenetic diversity measure, (‘PD’), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672

  7. Space Environment Effects: Model for Emission of Solar Protons (ESP): Cumulative and Worst Case Event Fluences

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.

    1999-01-01

    The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.

  8. Space Environment Effects: Model for Emission of Solar Protons (ESP)--Cumulative and Worst-Case Event Fluences

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, Edward A.; Gee, G. B.

    1999-01-01

    The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.

  9. Physical and composition characteristics of clinical secretions compared with test soils used for validation of flexible endoscope cleaning.

    PubMed

    Alfa, M J; Olson, N

    2016-05-01

    To determine which simulated-use test soils met the worst-case organic levels and viscosity of clinical secretions, and had the best adhesive characteristics. Levels of protein, carbohydrate and haemoglobin, and vibrational viscosity of clinical endoscope secretions were compared with test soils including ATS, ATS2015, Edinburgh, Edinburgh-M (modified), Miles, 10% serum and coagulated whole blood. ASTM D3359 was used for adhesion testing. Cleaning of a single-channel flexible intubation endoscope was tested after simulated use. The worst-case levels of protein, carbohydrate and haemoglobin, and viscosity of clinical material were 219,828 μg/mL, 9296 μg/mL, 9562 μg/mL and 6 cP, respectively. Whole blood, ATS2015 and Edinburgh-M were pipettable, with viscosities of 3.4 cP, 9.0 cP and 11.9 cP, respectively. ATS2015 and Edinburgh-M best matched the worst-case clinical parameters, but ATS had the best adhesion with 7% removal (36.7% for Edinburgh-M). Edinburgh-M and ATS2015 showed similar soiling and removal characteristics from the surface and lumen of a flexible intubation endoscope. Of the test soils evaluated, ATS2015 and Edinburgh-M were found to be good choices for the simulated use of endoscopes, as their composition and viscosity most closely matched worst-case clinical material. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Level II scour analysis for Bridge 38 (CONCTH00060038) on Town Highway 6, crossing the Moose River, Concord, Vermont

    USGS Publications Warehouse

    Olson, Scott A.

    1996-01-01

    Contraction scour for all modelled flows ranged from 0.1 to 3.1 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour at the left abutment ranged from 10.4 to 12.5 ft with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 25.3 to 27.3 ft with the worst-case occurring at the incipient-overtopping discharge. The worst-case total scour also occurred at the incipient-overtopping discharge. The incipient-overtopping discharge was in between the 100- and 500-year discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  11. Identification of Swallowing Tasks from a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment

    ERIC Educational Resources Information Center

    Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-01-01

    Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…

  12. Scoring best-worst data in unbalanced many-item designs, with applications to crowdsourcing semantic judgments.

    PubMed

    Hollis, Geoff

    2018-04-01

    Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
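    A minimal sketch of the simplest family of scoring rules discussed in this literature (a best-minus-worst count normalized by appearances; the many-item scoring algorithms introduced in the record above are more elaborate and are not reproduced here):

      # Best-minus-worst ("value") scoring for best-worst scaling trials.
      # Each trial is (items_shown, best_item, worst_item).  Illustrative only.
      from collections import defaultdict

      def value_scores(trials):
          best = defaultdict(int)
          worst = defaultdict(int)
          shown = defaultdict(int)
          for items, b, w in trials:
              for it in items:
                  shown[it] += 1
              best[b] += 1
              worst[w] += 1
          return {it: (best[it] - worst[it]) / shown[it] for it in shown}

      trials = [
          (("dog", "cat", "stone", "idea"), "dog", "stone"),
          (("cat", "stone", "tree", "idea"), "tree", "stone"),
      ]
      print(value_scores(trials))   # higher score = judged "better" more often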

  13. Finite Energy and Bounded Actuator Attacks on Cyber-Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M

    As control system networks are being connected to enterprise-level networks for remote monitoring, operation, and system-wide performance optimization, these same connections are providing vulnerabilities that can be exploited by malicious actors for attack, financial gain, and theft of intellectual property. Much effort in cyber-physical system (CPS) protection has focused on protecting the borders of the system through traditional information security techniques. Less effort has been applied to the protection of cyber-physical systems from intelligent attacks launched after an attacker has defeated the information security protections to gain access to the control system. In this paper, attacks on actuator signals are analyzed from a system-theoretic context. The threat surface is classified into finite-energy and bounded attacks. These two broad classes encompass a large range of potential attacks. The effects of these attacks on a linear quadratic (LQ) control are analyzed, and the optimal actuator attacks for both finite- and infinite-horizon LQ control are derived, thereby obtaining the worst-case attack signals. The closed-loop system under the optimal attack signals is given, and a numerical example illustrating the effect of an optimal bounded attack is provided.
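    For intuition only, the following is a small simulation sketch of a bounded actuator attack against an LQ-regulated plant (a toy double integrator and a greedy sign-type attack are assumptions for illustration; the record above derives the truly optimal attacks, which this sketch does not reproduce):

      # Toy illustration: bounded actuator attack on an LQ-regulated double integrator.
      import numpy as np
      from scipy.linalg import solve_discrete_are

      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.005], [0.1]])
      Q, R = np.eye(2), np.array([[0.1]])

      P = solve_discrete_are(A, B, Q, R)                   # LQR Riccati solution
      K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # control law u = -K x

      def run(attack_bound):
          x, cost = np.array([[1.0], [0.0]]), 0.0
          for _ in range(200):
              u = -K @ x
              # Greedy bounded attack: push the state "uphill" w.r.t. the value function.
              a = attack_bound * np.sign(B.T @ P @ (A @ x + B @ u))
              cost += float(x.T @ Q @ x + u.T @ R @ u)
              x = A @ x + B @ (u + a)
          return cost

      print("no attack:", run(0.0))
      print("bounded attack (|a| <= 1):", run(1.0))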

  14. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provenly identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  15. Optimal wireless receiver structure for omnidirectional inductive power transmission to biomedical implants.

    PubMed

    Gougheri, Hesam Sadeghi; Kiani, Mehdi

    2016-08-01

    In order to achieve omnidirectional inductive power transmission to biomedical implants, the use of several orthogonal coils on the receiver side (Rx) has been proposed in the past. In this paper, the optimal Rx structure for connecting three orthogonal Rx coils to the power management is found, with the goal of achieving the maximum power delivered to the load (PDL) in the presence of any Rx coil tilting. Unlike previous works, in which a separate power management has been used for each coil to deliver power to the load, different resonant Rx structures for connecting three Rx coils to a single power management are studied. In simulations, connecting three Rx coils with diameters of 3 mm, 3.3 mm, and 3.6 mm in series and resonating them with a single capacitor at an operation frequency of 100 MHz led to the maximum PDL for large loads when the implant was tilted by 45°. This optimal Rx structure achieves higher PDL in worst-case scenarios and also reduces the number of power managements to only one.

  16. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather formulate the problem as a distributionally robust optimization problem in which the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
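    Hedged sketch of the min-max structure described above, in generic notation (assumed here, not the paper's exact formulation):

      \min_{z \in Z_{\mathrm{ad}}}\; \max_{\mu \in \mathcal{A}}\; \int_{\Xi} J\big(u(z,\xi), z\big)\, d\mu(\xi) \quad \text{subject to} \quad e\big(u(z,\xi), z, \xi\big) = 0 \;\; \text{for a.e. } \xi \in \Xi,

    where z is the control or design, u(z, xi) solves the PDE constraint e = 0 for the uncertain input xi, and the ambiguity set \mathcal{A} collects all probability measures consistent with the available data.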

  17. SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montero, A Barragan; Sterpin, E; Lee, J

    Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present, as in dose painting. The aim of this study is to assess the robustness against setup and range errors of highly heterogeneous dose prescriptions (i.e., dose painting by numbers) delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with the RayStation treatment planning system and for the support provided.

  18. A Framework for Multi-Stakeholder Decision-Making and ...

    EPA Pesticide Factsheets

    This contribution describes the implementation of the conditional-value-at-risk (CVaR) metric to create a general multi-stakeholder decision-making framework. It is observed that stakeholder dissatisfactions (distance to their individual ideal solutions) can be interpreted as random variables. We thus shape the dissatisfaction distribution and find an optimal compromise solution by solving a CVaR minimization problem parameterized in the probability level. This enables us to generalize multi-stakeholder settings previously proposed in the literature that minimizes average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework. We demonstrate the framework in a bio-waste processing facility location case study, where we seek compromise solutions (facility locations) that balance stakeholder priorities on transportation, safety, water quality, and capital costs. This conference presentation abstract explains a new decision-making framework that computes compromise solution alternatives (reach consensus) by mitigating dissatisfactions among stakeholders as needed for SHC Decision Science and Support Tools project.
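    A minimal sketch of the underlying idea (discrete candidate solutions, equal stakeholder weights and a sample-based CVaR estimate are assumptions here; the framework above works with a parameterized CVaR minimization problem rather than enumeration):

      # Pick the candidate compromise solution whose stakeholder dissatisfactions
      # have the smallest CVaR at probability level alpha.  Illustrative sketch only.
      import numpy as np

      def cvar(values, alpha):
          # Empirical CVaR: mean of the worst (1 - alpha) fraction of dissatisfactions.
          v = np.sort(np.asarray(values))
          k = max(1, int(np.ceil((1 - alpha) * len(v))))
          return v[-k:].mean()

      # Rows: candidate facility locations; columns: stakeholder dissatisfactions in [0, 1].
      dissatisfaction = np.array([
          [0.10, 0.80, 0.30, 0.20],
          [0.35, 0.40, 0.35, 0.30],
          [0.05, 0.95, 0.10, 0.10],
      ])

      alpha = 0.75   # alpha -> 1 recovers the worst case, alpha -> 0 the average
      scores = [cvar(row, alpha) for row in dissatisfaction]
      print("chosen candidate:", int(np.argmin(scores)), "scores:", scores)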

  19. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    PubMed

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed: the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst-case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal-tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the trade-off between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness but at the cost of normal-tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD Anderson Cancer Center, and MD Anderson's cancer center support grant CA016672. © 2012 American Association of Physicists in Medicine.
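    A rough sketch of the per-voxel SD and SVH computation described above (array shapes, scenario generation and the binning below are assumptions for illustration; the treatment-planning integration is not shown):

      # Per-voxel standard deviation over uncertainty scenarios and an SD-volume
      # histogram (SVH) for one structure.  Illustrative sketch only.
      import numpy as np

      rng = np.random.default_rng(0)
      n_scenarios, n_voxels = 9, 5000        # nominal + 6 setup shifts + 2 range shifts
      doses = rng.normal(loc=60.0, scale=1.5, size=(n_scenarios, n_voxels))  # Gy, fake

      sd = doses.std(axis=0)                  # per-voxel SD across the 9 scenarios

      def svh(sd_values, thresholds):
          # Fraction of the structure's volume whose per-voxel SD exceeds each threshold.
          return [(sd_values >= t).mean() for t in thresholds]

      thresholds = np.linspace(0.0, 3.0, 7)   # Gy
      curve = svh(sd, thresholds)
      area = np.trapz(curve, thresholds)      # area under the SVH as a robustness score
      print(list(zip(thresholds.round(2), np.round(curve, 3))), "AUC:", round(area, 3))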

  20. How can health systems research reach the worst-off? A conceptual exploration.

    PubMed

    Pratt, Bridget; Hyder, Adnan A

    2016-11-15

    Health systems research is increasingly being conducted in low and middle-income countries (LMICs). Such research should aim to reduce health disparities between and within countries as a matter of global justice. For such research to do so, ethical guidance that is consistent with egalitarian theories of social justice proposes it ought to (amongst other things) focus on worst-off countries and research populations. Yet who constitutes the worst-off is not well-defined. By applying existing work on disadvantage from political philosophy, the paper demonstrates that (at least) two options exist for how to define the worst-off upon whom equity-oriented health systems research should focus: those who are worst-off in terms of health or those who are systematically disadvantaged. The paper describes in detail how both concepts can be understood and what metrics can be relied upon to identify worst-off countries and research populations at the sub-national level (groups, communities). To demonstrate how each can be used, the paper considers two real-world cases of health systems research and whether their choice of country (Uganda, India) and research population in 2011 would have been classified as amongst the worst-off according to the proposed concepts. The two proposed concepts can classify different countries and sub-national populations as worst-off. It is recommended that health researchers (or other actors) should use the concept that best reflects their moral commitments-namely, to perform research focused on reducing health inequalities or systematic disadvantage more broadly. If addressing the latter, it is recommended that they rely on the multidimensional poverty approach rather than the income approach to identify worst-off populations.

  1. Coverage-based constraints for IMRT optimization

    NASA Astrophysics Data System (ADS)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(d, v) of covering a specific target volume fraction v with a certain dose d. Using a constraint-based reformulation of coverage-based objectives, we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints, which do not require tedious adjustment of target volume objectives.
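    A small sketch of how a coverage probability q(d, v) can be estimated from explicit error scenarios (uniform scenario weights and toy arrays are assumptions here):

      # Estimate q(d, v): the probability, over error scenarios, that at least a
      # fraction v of the target receives dose >= d.  Illustrative sketch only.
      import numpy as np

      rng = np.random.default_rng(1)
      scenario_doses = rng.normal(60.0, 2.0, size=(50, 2000))   # scenarios x target voxels

      def coverage_probability(scenario_doses, d, v):
          covered_fraction = (scenario_doses >= d).mean(axis=1)  # per-scenario coverage
          return (covered_fraction >= v).mean()

      print(coverage_probability(scenario_doses, d=57.0, v=0.95))
      # A coverage-based constraint then requires this value to stay above, e.g., 0.9.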

  2. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses.

    PubMed

    Faith, Daniel P

    2015-02-19

    The phylogenetic diversity measure, ('PD'), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  3. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    PubMed

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with locking mechanisms to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose a mechanism that combines an instruction prefetching scheme (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve worst-case cache performance and, in turn, worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvements over static analysis and full cache locking.

  4. Combining Instruction Prefetching with Partial Cache Locking to Improve WCET in Real-Time Systems

    PubMed Central

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with locking mechanisms to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose a mechanism that combines an instruction prefetching scheme (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve worst-case cache performance and, in turn, worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvements over static analysis and full cache locking. PMID:24386133

  5. Approximation algorithms for a genetic diagnostics problem.

    PubMed

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
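    A minimal sketch of the plain greedy heuristic for weighted set cover, the classical building block that such heuristics extend (toy data; the WDC-specific objective and the directional-greedy variant are not reproduced here):

      # Greedy heuristic for weighted set cover: repeatedly pick the set with the
      # best (newly covered elements) / cost ratio.  Illustrative sketch only.
      def greedy_weighted_cover(universe, sets, costs):
          uncovered, chosen = set(universe), []
          while uncovered:
              best = max(
                  (name for name in sets if sets[name] & uncovered),
                  key=lambda name: len(sets[name] & uncovered) / costs[name],
                  default=None,
              )
              if best is None:        # some elements cannot be covered
                  break
              chosen.append(best)
              uncovered -= sets[best]
          return chosen

      sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 5}}
      costs = {"A": 2.0, "B": 1.0, "C": 2.5, "D": 1.0}
      print(greedy_weighted_cover({1, 2, 3, 4, 5, 6}, sets, costs))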

  6. Launch vehicle design and GNC sizing with ASTOS

    NASA Astrophysics Data System (ADS)

    Cremaschi, Francesco; Winter, Sebastian; Rossi, Valerio; Wiegand, Andreas

    2018-03-01

    The European Space Agency (ESA) is currently involved in several activities related to launch vehicle designs (Future Launcher Preparatory Program, Ariane 6, VEGA evolutions, etc.). Within these activities, ESA has identified the importance of developing a simulation infrastructure capable of supporting the multi-disciplinary design and preliminary guidance navigation and control (GNC) design of different launch vehicle configurations. Astos Solutions has developed the multi-disciplinary optimization and launcher GNC simulation and sizing tool (LGSST) under ESA contract. The functionality is integrated in the Analysis, Simulation and Trajectory Optimization Software for space applications (ASTOS) and is intended to be used from the early design phases up to phase B1 activities. ASTOS shall enable the user to perform detailed vehicle design tasks and assessment of GNC systems, covering all aspects of rapid configuration and scenario management, sizing of stages, trajectory-dependent estimation of structural masses, rigid and flexible body dynamics, navigation, guidance and control, worst case analysis, launch safety analysis, performance analysis, and reporting.

  7. Robust predictive control with optimal load tracking for critical applications. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tse, J.; Bentsman, J.; Miller, N.

    1994-09-01

    This report derives a multi-input multi-output (MIMO) version of a two-degree-of-freedom receding-horizon control law based on mixed H2/H∞ minimization. First, the integrand in the frequency domain representation of the MIMO performance criterion is decomposed into disturbance and reference spectra. Then the controller is derived which minimizes the peak of the disturbance spectrum and the integral of the reference spectrum on the unit circle. The resulting two-degree-of-freedom MIMO control strategy, referred to as the minimax predictive multivariable control (MPC), is shown to have worst-case-disturbance-rejection and robust-stability properties superior to those of purely H2-optimal controllers, such as Generalized Predictive Control (GPC), for identical horizons. An attractive feature of the receding horizon structure of MPC is that it can, in ways similar to GPC, directly incorporate input constraints and pre-programmed reference inputs, which are nontrivial tasks in the standard H∞ design.

  8. Guidelines for Developing Spacecraft Structural Requirements: A Thermal and Environmental Perspective

    NASA Technical Reports Server (NTRS)

    Holladay, Jon; Day, Greg; Gill, Larry

    2004-01-01

    Spacecraft are typically designed with a primary focus on weight in order to meet launch vehicle performance parameters. However, for pressurized and/or man-rated spacecraft, it is also necessary to have an understanding of the vehicle operating environments to properly size the pressure vessel. Proper sizing of the pressure vessel requires an understanding of the space vehicle's life cycle and compares the physical design optimization (weight and launch "cost") to downstream operational complexity and total life cycle cost. This paper will provide an overview of some major environmental design drivers and provide examples for calculating the optimal design pressure versus a selected set of design parameters related to thermal and environmental perspectives. In addition, this paper will provide a generic set of cracking pressures for both positive and negative pressure relief valves that encompasses worst case environmental effects for a variety of launch / landing sites. Finally, several examples are included to highlight pressure relief set points and vehicle weight impacts for a selected set of orbital missions.

  9. Severe anaemia associated with Plasmodium falciparum infection in children: consequences for additional blood sampling for research.

    PubMed

    Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan

    2016-06-02

    Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study on children suspected of severe malaria in sub-Saharan Africa, the question arose of how much blood could be safely sampled; intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not to exceed 3.8% of total blood volume (TBV). In a simulation exercise using data of children previously enrolled in a study about severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8% safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data on age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe) (mL) was calculated by multiplying the body weight (kg) by the factor 80 (mL/kg). Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor consisted of the ratio 'Hb of the child divided by the reference Hb'; both the lowest ('best case') and highest ('worst case') reference Hb values were used. Next, the exact volume that a 3.8% proportion of this TBVf would represent was calculated, and this volume was compared to the blood volumes that were intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise pointed out that in 5.3% (best case) and 11.4% (worst case) of children the blood volume intended to be sampled would exceed the volume defined by the 3.8% safety guideline. The highest proportions were in the age groups 2-6 months (19.0%; worst-case scenario) and 6 months-2 years (15.7%; worst-case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst-case scenario (p = 0.016). Blood sampling in children for research in P. falciparum endemic settings may easily violate the proposed safety guideline when applied to TBVf. Ethical committees and researchers should be wary of this and take appropriate precautions.
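    A small worked sketch of the calculation described above (the 80 mL/kg factor, the 3.8% guideline and the Hb correction are taken from the record; the reference Hb value and the example child below are assumptions for illustration):

      # Check an intended research blood-sample volume against the 3.8% guideline,
      # using the functional total blood volume (TBVf) of an anaemic child.
      def max_safe_sample_ml(weight_kg, hb_child_g_dl, hb_reference_g_dl,
                             fraction=0.038, ml_per_kg=80.0):
          tbv_estimated = weight_kg * ml_per_kg                              # TBVe in mL
          tbv_functional = tbv_estimated * hb_child_g_dl / hb_reference_g_dl # TBVf
          return fraction * tbv_functional

      # Hypothetical 1-year-old: 9 kg, Hb 6 g/dL, reference Hb 11 g/dL (assumed value).
      limit = max_safe_sample_ml(9.0, 6.0, 11.0)
      intended = 6.0                                        # mL, per the protocol
      print(f"limit {limit:.1f} mL -> intended {intended} mL "
            f"{'violates' if intended > limit else 'respects'} the guideline")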

  10. Discussions On Worst-Case Test Condition For Single Event Burnout

    NASA Astrophysics Data System (ADS)

    Liu, Sandra; Zafrani, Max; Sherman, Phillip

    2011-10-01

    This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs based on analyzing quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB would be to use the ion with the highest mass, which results in the highest transient current due to charge deposition and displacement damage. The analyses also show that it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, which has been verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.

  11. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a single threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated the dependence of the speedup effect on the target volume and the applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization. The technique was effective particularly for large target cases.
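    A schematic sketch of the dual-loop idea described above (plain gradient-style updates, toy dose-influence matrices and a fixed outer-iteration count are assumptions here; the actual TPS optimizer, beam model and worst-case handling are not reproduced):

      # Dual-loop dose optimization sketch: the halo contribution is frozen during
      # inner iterations and refreshed only in the outer loop.
      import numpy as np

      def optimize_dual_loop(D_main, D_halo, prescription, w0,
                             outer_iters=5, inner_iters=50, step=1e-3):
          w = w0.copy()
          for _ in range(outer_iters):
              halo_dose = D_halo @ w                 # recomputed once per outer loop
              for _ in range(inner_iters):
                  dose = D_main @ w + halo_dose      # halo part held constant
                  grad = D_main.T @ (dose - prescription)
                  w = np.maximum(w - step * grad, 0.0)   # spot weights stay non-negative
          return w

      rng = np.random.default_rng(2)
      n_vox, n_spots = 400, 100
      D_main = rng.random((n_vox, n_spots)) * 0.05   # toy dose-influence matrices
      D_halo = rng.random((n_vox, n_spots)) * 0.005
      prescription = np.full(n_vox, 2.0)             # Gy
      w = optimize_dual_loop(D_main, D_halo, prescription, np.ones(n_spots))
      print("mean dose:", float((D_main @ w + D_halo @ w).mean()))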

  12. Atmospheric transport of radioactive debris to Norway in case of a hypothetical accident related to the recovery of the Russian submarine K-27.

    PubMed

    Bartnicki, Jerzy; Amundsen, Ingar; Brown, Justin; Hosseini, Ali; Hov, Øystein; Haakenstad, Hilde; Klein, Heiko; Lind, Ole Christian; Salbu, Brit; Szacinski Wendel, Cato C; Ytre-Eide, Martin Album

    2016-01-01

    The Russian nuclear submarine K-27 suffered a loss of coolant accident in 1968 and with nuclear fuel in both reactors it was scuttled in 1981 in the outer part of Stepovogo Bay located on the eastern coast of Novaya Zemlya. The inventory of spent nuclear fuel on board the submarine is of concern because it represents a potential source of radioactive contamination of the Kara Sea and a criticality accident with potential for long-range atmospheric transport of radioactive particles cannot be ruled out. To address these concerns and to provide a better basis for evaluating possible radiological impacts of potential releases in case a salvage operation is initiated, we assessed the atmospheric transport of radionuclides and deposition in Norway from a hypothetical criticality accident on board the K-27. To achieve this, a long term (33 years) meteorological database has been prepared and used for selection of the worst case meteorological scenarios for each of three selected locations of the potential accident. Next, the dispersion model SNAP was run with the source term for the worst-case accident scenario and selected meteorological scenarios. The results showed predictions to be very sensitive to the estimation of the source term for the worst-case accident and especially to the sizes and densities of released radioactive particles. The results indicated that a large area of Norway could be affected, but that the deposition in Northern Norway would be considerably higher than in other areas of the country. The simulations showed that deposition from the worst-case scenario of a hypothetical K-27 accident would be at least two orders of magnitude lower than the deposition observed in Norway following the Chernobyl accident. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots.

    PubMed

    Busch, Martin H J; Vollmann, Wolfgang; Grönemeyer, Dietrich H W

    2006-05-26

    Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow diagnostic information about the implant lumen to be obtained (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach one quarter of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and also depends on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional conditions are considered in a numerical simulation, which are more realistic and may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into both wire parts at the location of a defect. The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is accounted for in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. The analytical solution as worst-case scenario, as well as the finite volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases.
Wire radius, wire material, and the physiological parameters blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but they do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, and further from, the worst case. In both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as the absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q V(ind) < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore carry no risk of high temperature increases. The finite volume analysis shows clearly that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q V(ind) > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for.

  14. Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots

    PubMed Central

    Busch, Martin HJ; Vollmann, Wolfgang; Grönemeyer, Dietrich HW

    2006-01-01

    Background Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow diagnostic information about the implant lumen (in-stent stenosis or thrombosis, for example) to be obtained. The electromagnetic rf pulses applied during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path caused by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach ¼ of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and also depends on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. Methods First, an analytical solution for a hot spot in thermal equilibrium is described. This analytical solution with a definite hot-spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions that may make the results less critical are considered in a numerical simulation. The analytical solution as well as the numerical simulations use experimental data on the maximum hot-spot power loss of implanted resonators of a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-dependent temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot-spot power loss is assumed to diffuse into the two wire parts at the location of the defect. The energy is distributed from there by heat conduction. Additionally, the effects of blood perfusion and blood flow are taken into account in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. Results The analytical solution as the worst-case scenario, as well as the finite volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot-spot situations. MR investigations with a high rf-pulse density lasting less than a minute can produce volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot-spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius, wire material, and the physiological parameters of blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. Conclusion The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate results for conditions both near to and further from the worst case. In both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as the absolute worst case, indicates that resonators with a small product of inductance volume and quality factor (Q Vind < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore carry no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q Vind > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for. PMID:16729878

  15. Failed State 2030: Nigeria - A Case Study

    DTIC Science & Technology

    2011-02-01

    disastrous ecological conditions in its Niger Delta region, and is fighting one of the modern world's worst legacies of political and economic corruption. A ...world's worst legacies of political and economic corruption. A nation with more than 350 ethnic groups, 250 languages, and three distinct religious...happening in the world. The discussion herein is a mix of cultural sociology, political science, economics, military science (sometimes called

  16. JPL Thermal Design Modeling Philosophy and NASA-STD-7009 Standard for Models and Simulations - A Case Study

    NASA Technical Reports Server (NTRS)

    Avila, Arturo

    2011-01-01

    Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature. Thus, these simulations represent the upper and lower bounds. This, effectively, represents the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
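
    The worst-case stacking idea described above can be illustrated with a single-node radiative balance. The sketch below is a minimal illustration only; the one-node energy balance, parameter names, and numbers are assumptions chosen to show how hot- and cold-biased parameter stacks bound the predicted temperature, and it is not JPL's actual model.

    ```python
    import itertools

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def node_temperature(solar_flux, absorptivity, emissivity, q_internal,
                         area_sun=0.5, area_rad=1.0):
        """Equilibrium temperature (K) of a single radiating node."""
        absorbed = absorptivity * solar_flux * area_sun + q_internal
        return (absorbed / (emissivity * SIGMA * area_rad)) ** 0.25

    # Hypothetical uncertain parameters as (cold-biased, hot-biased) pairs.
    ranges = {
        "solar_flux":   (1322.0, 1414.0),   # W/m^2
        "absorptivity": (0.15, 0.35),
        "emissivity":   (0.88, 0.78),       # low emissivity is hot-biased
        "q_internal":   (20.0, 60.0),       # W
    }

    cold = node_temperature(*(v[0] for v in ranges.values()))
    hot = node_temperature(*(v[1] for v in ranges.values()))
    print(f"cold-biased stack: {cold - 273.15:6.1f} degC")
    print(f"hot-biased stack:  {hot - 273.15:6.1f} degC")

    # Exhaustive corner search confirms the biased stacks bound every corner
    # of this monotonic toy model.
    corners = [node_temperature(*combo) for combo in itertools.product(*ranges.values())]
    print(f"corner range:      {min(corners) - 273.15:.1f} to {max(corners) - 273.15:.1f} degC")
    ```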

  17. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
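
    To make the objective and the WSPT-style ordering above concrete, the sketch below computes permutation flow-shop completion times and the total weighted quadratic completion time for a WSPT-ordered sequence versus an arbitrary one. The data and the simple tie-free ordering rule are hypothetical; the paper's WSPT-CC rule additionally imposes a consistency condition not reproduced here.

    ```python
    import random

    def completion_times(perm, proc):
        """proc[j][i] is the processing time of job j on machine i; returns the final
        completion time of each job, in the order given by the permutation perm."""
        m = len(proc[0])
        prev = [0.0] * m                      # completion times of the previous job on each machine
        finals = []
        for j in perm:
            cur = [0.0] * m
            for i in range(m):
                ready = prev[i] if i == 0 else max(prev[i], cur[i - 1])
                cur[i] = ready + proc[j][i]
            prev = cur
            finals.append(cur[-1])
        return finals

    def twqct(perm, proc, w):
        """Total weighted quadratic completion time of a permutation."""
        return sum(w[j] * c ** 2 for j, c in zip(perm, completion_times(perm, proc)))

    random.seed(1)
    n, m = 8, 3
    proc = [[random.randint(1, 9) for _ in range(m)] for _ in range(n)]
    w = [random.randint(1, 5) for _ in range(n)]

    # WSPT-style order: ascending total processing time divided by weight.
    wspt = sorted(range(n), key=lambda j: sum(proc[j]) / w[j])
    arbitrary = list(range(n))

    print("WSPT order objective:     ", twqct(wspt, proc, w))
    print("arbitrary order objective:", twqct(arbitrary, proc, w))
    ```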

  18. Acquisition Management for System-of-Systems: Requirement Evolution and Acquisition Strategy Planning

    DTIC Science & Technology

    2012-04-30

    DoD SERC Aeronautics & Astronautics, 5/16/2012, NPS 9th Annual Acquisition Research Symposium... [Figure residue omitted: plot of probability to complete a mission vs. time (mins) for architecture 1 and architecture 2, and a worst-case comparison of the two architectures vs. % of system failures.]

  19. Filter distortion effects on telemetry signal-to-noise ratio

    NASA Technical Reports Server (NTRS)

    Sadr, R.; Hurd, W.

    1987-01-01

    The effect of filtering on the Signal-to-Noise Ratio (SNR) of a coherently demodulated band-limited signal is determined in the presence of worst-case amplitude ripple. The problem is formulated mathematically as an optimization problem in the L2 Hilbert space. The form of the worst-case amplitude ripple is specified, and the degradation in the SNR is derived in a closed-form expression. It is shown that when the maximum passband amplitude ripple is 2 delta (peak to peak), the SNR is degraded by at most a factor of (1 - delta squared), even when the ripple is unknown or uncompensated. For example, an SNR loss of less than 0.01 dB due to amplitude ripple can be assured by keeping the amplitude ripple under 0.42 dB.
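
    A quick numerical check of the quoted bound is sketched below. The mapping from the 2-delta peak-to-peak amplitude ripple to a dB figure (20*log10((1+delta)/(1-delta))) is our assumption, not stated in the record, and the degradation factor (1 - delta**2) is taken from the abstract above.

    ```python
    import math

    def snr_loss_db(ripple_pp_db):
        """Worst-case SNR degradation (dB) for a given peak-to-peak amplitude ripple (dB).

        Assumes the amplitude varies in [1 - delta, 1 + delta] and that the SNR
        degradation factor is (1 - delta**2), as quoted in the abstract above.
        """
        ratio = 10 ** (ripple_pp_db / 20.0)      # (1 + delta) / (1 - delta)
        delta = (ratio - 1.0) / (ratio + 1.0)
        return -10.0 * math.log10(1.0 - delta ** 2)

    print(f"0.42 dB ripple -> {snr_loss_db(0.42):.4f} dB SNR loss (under 0.01 dB)")
    print(f"1.00 dB ripple -> {snr_loss_db(1.00):.4f} dB SNR loss")
    ```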

  20. Worst case encoder-decoder policies for a communication system in the presence of an unknown probabilistic jammer

    NASA Astrophysics Data System (ADS)

    Cascio, David M.

    1988-05-01

    States of nature or observed data are often stochastically modelled as Gaussian random variables. At times it is desirable to transmit this information from a source to a destination with minimal distortion. Complicating this objective is the possible presence of an adversary attempting to disrupt this communication. In this report, solutions are provided to a class of minimax and maximin decision problems, which involve the transmission of a Gaussian random variable over a communications channel corrupted by both additive Gaussian noise and probabilistic jamming noise. The jamming noise is termed probabilistic in the sense that with nonzero probability 1-P, the jamming noise is prevented from corrupting the channel. We shall seek to obtain optimal linear encoder-decoder policies which minimize given quadratic distortion measures.

  1. SU-F-R-39: Effects of Radiation Dose Reduction On Renal Cell Carcinoma Discrimination Using Multi-Phasic CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahi-Anwar, M; Young, S; Lo, P

    Purpose: A method to discriminate different types of renal cell carcinoma (RCC) was developed using attenuation values observed in multiphasic contrast-enhanced CT. This work evaluates the sensitivity of this RCC discrimination task at different CT radiation dose levels. Methods: We selected 5 cases of kidney lesion patients who had undergone four-phase CT scans covering the abdomen to the iliac crest. Through an IRB-approved study, the scans were conducted on 64-slice CT scanners (Definition AS/Definition Flash, Siemens Healthcare) using automatic tube-current modulation (TCM). The protocol included an initial baseline unenhanced scan, followed by three post-contrast injection phases. CTDIvol (32 cm phantom) measured between 9 and 35 mGy for any given phase. As a preliminary study, we limited the scope to the cortico-medullary phase, shown previously to be the most discriminative phase. A previously validated method was used to simulate a reduced-dose acquisition by adding noise to raw CT sinogram data, emulating corresponding images at simulated doses of 50%, 25%, and 10%. To discriminate the lesion subtype, ROIs were placed in the most enhancing region of the lesion. The mean HU value of an ROI was extracted and used to assign the worst-case RCC subtype, ranked in the order of clear cell, papillary, chromophobe and the benign oncocytoma. Results: Two patients exhibited a change of worst-case RCC subtype between original and simulated scans, at 25% and 10% doses. In one case, the worst-case RCC subtype changed from oncocytoma to chromophobe at 10% and 25% doses, while the other case changed from oncocytoma to clear cell at 10% dose. Conclusion: Based on preliminary results from an initial cohort of 5 patients, worst-case RCC subtypes remained constant at all simulated dose levels except for 2 patients. Further study conducted on more patients will be needed to confirm our findings. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant Support from: U01 CA181156.

  2. Influence of robust optimization in intensity-modulated proton therapy with different dose delivery techniques

    PubMed Central

    Liu, Wei; Li, Yupeng; Li, Xiaoqiang; Cao, Wenhua; Zhang, Xiaodong

    2012-01-01

    Purpose: The distal edge tracking (DET) technique in intensity-modulated proton therapy (IMPT) allows for high energy efficiency, fast and simple delivery, and simple inverse treatment planning; however, it is highly sensitive to uncertainties. In this study, the authors explored the application of DET in IMPT (IMPT-DET) and conducted robust optimization of IMPT-DET to see if the planning technique's sensitivity to uncertainties was reduced. They also compared conventional and robust optimization of IMPT-DET with three-dimensional IMPT (IMPT-3D) to gain understanding about how plan robustness is achieved. Methods: They compared the robustness of IMPT-DET and IMPT-3D plans to uncertainties by analyzing plans created for a typical prostate cancer case and a base of skull (BOS) cancer case (using data for patients who had undergone proton therapy at our institution). Spots with the highest and second highest energy layers were chosen so that the Bragg peak would be at the distal edge of the targets in IMPT-DET using 36 equally spaced angle beams; in IMPT-3D, 3 beams with angles chosen by a beam angle optimization algorithm were planned. Dose contributions for a number of range and setup uncertainties were calculated, and a worst-case robust optimization was performed. A robust quantification technique was used to evaluate the plans' sensitivity to uncertainties. Results: With no uncertainties considered in the optimization, the DET method is less robust to uncertainties than the 3D method but offers better normal tissue protection. Robust optimization that accounts for range and setup uncertainties can improve the robustness of IMPT plans to these uncertainties; however, our findings show that the extent of improvement varies. Conclusions: IMPT's sensitivity to uncertainties can be reduced by using robust optimization. They found two possible mechanisms that made improvements possible: (1) a localized single-field uniform dose distribution (LSFUD) mechanism, in which the optimization algorithm attempts to produce a single-field uniform dose distribution while minimizing the patching field as much as possible; and (2) perturbed dose distribution, which follows the change in anatomical geometry. Multiple-instance optimization has more knowledge of the influence matrices; this greater knowledge improves IMPT plans' ability to retain robustness despite the presence of uncertainties. PMID:22755694
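
    The worst-case (minimax) robust optimization referred to above can be sketched with a toy beamlet-weight problem: minimize the largest squared dose error over a small set of perturbed influence matrices standing in for setup and range scenarios. The matrices, prescription, and solver choice below are illustrative assumptions, not the authors' planning system; cvxpy is assumed to be available as the convex-optimization modeller.

    ```python
    import numpy as np
    import cvxpy as cp  # any convex-optimization modeller would do; cvxpy assumed available

    rng = np.random.default_rng(0)
    n_vox, n_beamlets, n_scen = 30, 10, 5

    A_nominal = rng.uniform(0.0, 1.0, (n_vox, n_beamlets))
    # Perturbed influence matrices standing in for setup/range scenarios.
    scenarios = [A_nominal * (1.0 + 0.05 * rng.standard_normal(A_nominal.shape))
                 for _ in range(n_scen)]
    d_presc = np.ones(n_vox)                  # prescribed dose, arbitrary units

    # Worst-case (minimax) optimization: minimize the largest scenario dose error.
    x = cp.Variable(n_beamlets, nonneg=True)
    t = cp.Variable()
    constraints = [cp.sum_squares(A @ x - d_presc) <= t for A in scenarios]
    cp.Problem(cp.Minimize(t), constraints).solve()

    # Nominal (single-scenario) plan for comparison.
    x_nom = cp.Variable(n_beamlets, nonneg=True)
    cp.Problem(cp.Minimize(cp.sum_squares(A_nominal @ x_nom - d_presc))).solve()

    def worst(xv):
        return max(float(np.sum((A @ xv - d_presc) ** 2)) for A in scenarios)

    print("worst-case error, robust plan:  %.4f" % worst(x.value))
    print("worst-case error, nominal plan: %.4f" % worst(x_nom.value))
    ```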

  3. Design optimization for active twist rotor blades

    NASA Astrophysics Data System (ADS)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst-case loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation in the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process showed that the design could be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to explore the nonlinear design space of a complex planform. For this case in particular, a detailed design was carried out to make the actual blade manufacturable. The proposed optimization framework is shown to be an effective tool to design high-authority active twist blades to reduce vibration in future helicopter rotor blades.

  4. Cost-efficacy of biologic therapies for psoriatic arthritis from the perspective of the Taiwanese healthcare system.

    PubMed

    Yang, Tsong-Shing; Chi, Ching-Chi; Wang, Shu-Hui; Lin, Jing-Chi; Lin, Ko-Ming

    2016-10-01

    Biologic therapies are more effective but more costly than conventional therapies in treating psoriatic arthritis. To evaluate the cost-efficacy of etanercept, adalimumab and golimumab therapies in treating active psoriatic arthritis in a Taiwanese setting, we conducted a meta-analysis of randomized placebo-controlled trials to calculate the incremental efficacy of etanercept, adalimumab and golimumab, respectively, in achieving Psoriatic Arthritis Response Criteria (PsARC) and a 20% improvement in the American College of Rheumatology score (ACR20). The base, best, and worst case incremental cost-effectiveness ratios (ICERs) for one subject to achieve PsARC and ACR20 were calculated. The annual ICERs per PsARC responder were US$27 047 (best scenario US$16 619; worst scenario US$31 350), US$39 339 (best scenario US$31 846; worst scenario US$53 501) and US$27 085 (best scenario US$22 716; worst scenario US$33 534) for etanercept, adalimumab and golimumab, respectively. The annual ICERs per ACR20 responder were US$27 588 (best scenario US$20 900; worst scenario US$41 800), US$39 339 (best scenario US$25 236; worst scenario US$83 595) and US$33 534 (best scenario US$27 616; worst scenario US$44 013) for etanercept, adalimumab and golimumab, respectively. In a Taiwanese setting, etanercept had the lowest annual costs per PsARC and ACR20 responder, while adalimumab had the highest annual costs per PsARC and ACR20 responder. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
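
    The ICER arithmetic behind figures like these is straightforward. The sketch below reproduces the calculation with placeholder inputs; the drug costs and response rates are hypothetical, not the study's data.

    ```python
    def annual_icer(annual_drug_cost, annual_comparator_cost,
                    response_rate_drug, response_rate_comparator):
        """Incremental cost-effectiveness ratio: extra cost per additional responder per year."""
        incremental_cost = annual_drug_cost - annual_comparator_cost
        incremental_efficacy = response_rate_drug - response_rate_comparator
        return incremental_cost / incremental_efficacy

    # Hypothetical base / best / worst incremental response rates for one biologic.
    for label, incremental_response in [("base", 0.45), ("best", 0.55), ("worst", 0.35)]:
        icer = annual_icer(14000, 2000, incremental_response + 0.10, 0.10)
        print(label, "ICER per PsARC responder: US$%.0f" % icer)
    ```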

  5. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggested that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
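
    The two trade-off measures named above (maximum absolute imbalance and correct-guess probability) can be estimated for any sequential design by simulation. The sketch below does this for Efron's biased coin design and the big stick design under the usual "guess the under-allocated arm" convergence strategy; the sample size, tolerance, and bias parameters are illustrative choices, not the paper's full study grid.

    ```python
    import random

    def efron_bcd(n, p=2/3, rng=random):
        """Efron's biased coin: favour the under-represented arm with probability p."""
        seq, diff = [], 0                       # diff = n_A - n_B
        for _ in range(n):
            if diff == 0:
                a = rng.random() < 0.5
            elif diff > 0:
                a = rng.random() < 1 - p        # A is ahead -> assign A with probability 1 - p
            else:
                a = rng.random() < p
            seq.append(a)
            diff += 1 if a else -1
        return seq

    def big_stick(n, b=3, rng=random):
        """Soares and Wu's big stick: fair coin unless |imbalance| reaches the tolerance b."""
        seq, diff = [], 0
        for _ in range(n):
            if diff >= b:
                a = False
            elif diff <= -b:
                a = True
            else:
                a = rng.random() < 0.5
            seq.append(a)
            diff += 1 if a else -1
        return seq

    def evaluate(design, n=100, reps=5000):
        max_imb, correct = 0, 0
        for _ in range(reps):
            diff = 0
            for a in design(n):
                # Convergence strategy: guess the currently under-allocated arm (coin flip on ties).
                guess = (diff < 0) if diff != 0 else (random.random() < 0.5)
                correct += guess == a
                diff += 1 if a else -1
                max_imb = max(max_imb, abs(diff))
        return max_imb, correct / (reps * n)

    for name, design in [("Efron BCD (p=2/3)", efron_bcd), ("Big stick (b=3)", big_stick)]:
        imb, cg = evaluate(design)
        print(f"{name}: max |imbalance| = {imb}, correct-guess probability = {cg:.3f}")
    ```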

  6. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.

  7. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can induce cavitation and, in the worst case, may lead to system failures. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.

  8. Emotional States of Athletes Prior to Performance-Induced Injury

    PubMed Central

    Devonport, Tracey J.; Lane, Andrew M.; Hanin, Yuri L.

    2005-01-01

    Psychological states experienced by athletes prior to injured, best and worst performances were investigated retrospectively using a mixed methodology. Fifty-nine athletes volunteered to complete an individualized assessment of performance states based on the Individual Zones of Optimal Functioning (IZOF) model. A subsection (n = 30) of participants completed a standardized psychometric scale (Brunel Mood Rating Scale: BRUMS), retrospectively describing how they felt before best, worst, and injured performances. IZOF results showed similar emotion states being identified for injured and best performances. Analysis of BRUMS scores indicated a significant main effect for differences in mood by performance outcome, with post-hoc analyses showing best performance was associated with lower scores on depression and fatigue and higher vigor than injured performance and worst performance. Worst performance was associated with higher fatigue and confusion than injured performance. Results indicate that retrospective emotional profiles before injured performance are closer to successful performance than unsuccessful, and confirm differences between successful and unsuccessful performance. The qualitative and quantitative approaches used to retrospectively assess pre-performance emotional states before three performance outcomes produced complementary findings. Practical implications of the study are discussed. Key Points Psychological states experienced by athletes prior to injured, best and worst performances were investigated retrospectively using a mixed methodology. Results indicate that retrospective emotional profiles before injured performance are closer to successful performance than unsuccessful, and confirm differences between successful and unsuccessful performance, a finding that occurred using both methods. Future research should further examine the emotional antecedents of injury, and applied sport psychologists should recognize the potential risk of injury associated with emotional profiles typically linked with best performance. PMID:24501552

  9. Medical Optimization Network for Space Telemedicine Resources

    NASA Technical Reports Server (NTRS)

    Shah, R. V.; Mulcahy, R.; Rubin, D.; Antonsen, E. L.; Kerstman, E. L.; Reyes, D.

    2017-01-01

    INTRODUCTION: Long-duration missions beyond low Earth orbit introduce new constraints to the space medical system such as the inability to evacuate to Earth, communication delays, and limitations in clinical skillsets. NASA recognizes the need to improve capabilities for autonomous care on such missions. As the medical system is developed, it is important to have an ability to evaluate the trade space of what resources will be most important. The Medical Optimization Network for Space Telemedicine Resources was developed for this reason, and is now a system to gauge the relative importance of medical resources in addressing medical conditions. METHODS: A list of medical conditions of potential concern for an exploration mission was referenced from the Integrated Medical Model, a probabilistic model designed to quantify in-flight medical risk. The diagnostic and treatment modalities required to address best and worst-case scenarios of each medical condition, at the terrestrial standard of care, were entered into a database. This list included tangible assets (e.g. medications) and intangible assets (e.g. clinical skills to perform a procedure). A team of physicians working within the Exploration Medical Capability Element of NASA's Human Research Program ranked each of the items listed according to its criticality. Data was then obtained from the IMM for the probability of occurrence of the medical conditions, including a breakdown of best case and worst case, during a Mars reference mission. The probability of occurrence information and criticality for each resource were taken into account during analytics performed using Tableau software. RESULTS: A database and weighting system to evaluate all the diagnostic and treatment modalities was created by combining the probability of condition occurrence data with the criticalities assigned by the physician team. DISCUSSION: Exploration Medical Capabilities research at NASA is focused on providing a medical system to support crew medical needs in the context of a Mars mission. MONSTR is a novel approach to performing a quantitative risk analysis that will assess the relative value of individual resources needed for the diagnosis and treatment of various medical conditions. It will provide the operational and research communities at NASA with information to support informed decisions regarding areas of research investment, future crew training, and medical supplies manifested as part of the exploration medical system.
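
    A minimal version of the weighting described above, combining condition occurrence probabilities with resource criticality scores, is sketched below. The weighting rule is our reading of the abstract, and the conditions, resources, and numbers are entirely hypothetical.

    ```python
    # Hypothetical per-mission condition probabilities and resource criticality scores (1 = low, 5 = high).
    conditions = {"urinary retention": 0.30, "dental abscess": 0.05, "kidney stone": 0.02}
    resources = {
        "ultrasound":       {"kidney stone": 5, "urinary retention": 4},
        "urinary catheter": {"urinary retention": 5},
        "oral antibiotics": {"dental abscess": 4, "urinary retention": 2},
    }

    # Importance of a resource = sum over conditions of P(condition) * criticality.
    importance = {
        r: sum(conditions[c] * crit for c, crit in uses.items())
        for r, uses in resources.items()
    }
    for r, score in sorted(importance.items(), key=lambda kv: -kv[1]):
        print(f"{r:17s} weighted importance = {score:.2f}")
    ```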

  10. Worst-Case Cooperative Jamming for Secure Communications in CIoT Networks.

    PubMed

    Li, Zhen; Jing, Tao; Ma, Liran; Huo, Yan; Qian, Jin

    2016-03-07

    The Internet of Things (IoT) is a significant branch of the ongoing advances in the Internet and mobile communications. The use of a large number of IoT devices makes the spectrum scarcity problem even more serious. The usable spectrum resources are almost entirely occupied, and thus, the increasing radio access demands of IoT devices cannot be met. To tackle this problem, the Cognitive Internet of Things (CIoT) has been proposed. In a CIoT network, secondary users, i.e., sensors and actuators, can access the licensed spectrum bands provided by licensed primary users (such as telephones). Security is a major concern in CIoT networks. However, the traditional encryption method at upper layers (such as symmetric cryptography and asymmetric cryptography) may be compromised in CIoT networks, since these types of networks are heterogeneous. In this paper, we address the security issue in spectrum-leasing-based CIoT networks using physical layer methods. Considering that the CIoT networks are cooperative networks, we propose to employ cooperative jamming to achieve secrecy transmission. In the cooperative jamming scheme, a certain secondary user is employed as the helper to harvest energy transmitted by the source and then uses the harvested energy to generate an artificial noise that jams the eavesdropper without interfering with the legitimate receivers. The goal is to minimize the signal to interference plus noise ratio (SINR) at the eavesdropper subject to the quality of service (QoS) constraints of the primary traffic and the secondary traffic. We formulate the considered minimization problem into a two-stage robust optimization problem based on the worst-case Channel State Information of the Eavesdropper. By using semi-definite programming (SDP), the optimal solutions of the transmit covariance matrices can be obtained. Moreover, in order to build an incentive mechanism for the secondary users, we propose an auction framework based on the cooperative jamming scheme. The proposed auction framework jointly formulates the helper selection and the corresponding energy allocation problems under the constraint of the eavesdropper's SINR. By adopting the Vickrey auction, truthfulness and individual rationality can be guaranteed. Simulation results demonstrate the good performance of the cooperative jamming scheme and the auction framework.

  11. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.

  12. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/sqrt[N])-in contrast to that of classical probability estimation, which is O(1/N)-where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  13. [Analysis on accessibility of urban park green space: the case study of Shenyang Tiexi District].

    PubMed

    Lu, Ning; Li, Jun-Ying; Yan, Hong-Wei; Shi, Tuo; Li, Ying

    2014-10-01

    The accessibility of urban park green space is an important indicator of how conveniently and fairly citizens can enjoy the natural services supplied by parks. This paper took Shenyang Tiexi District as an example to evaluate the accessibility of urban park green space based on QuickBird imagery and GIS software, with four modes of transportation considered: walking, non-motor vehicle, motor vehicle and public transport. The research compared and analyzed the distribution of the accessible area and the accessible population of park green space. The results demonstrated that park green space in Shenyang Tiexi District was not sufficient and its distribution was not even. To be precise, the accessibility in the southwest and central parts was relatively good, that in marginal sites was worse, and that in the east and north parts was the worst. Furthermore, the accessibility based on different modes of transportation varied a lot. The accessibility by motor vehicle was the best, followed by non-motor vehicle and public transport, and walking was the worst. Most of the regions could be reached within 30 minutes by walking, 15 minutes by non-motor vehicle and public transport, and 10 minutes by motor vehicle. This paper has practical significance for further systematic research on green space spatial pattern optimization.

  14. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many of the contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Under highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits and circumvent nonlinear model complexities and handles uncertainties for superior real-time operations. The proposed intelligent system framework optimizes the smart grid power generation for maximum economical and ecological benefits under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrate various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economical and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.

  15. NASA's Planned Return to the Moon: Global Access and Anytime Return Requirement Implications on the Lunar Orbit Insertion Burns

    NASA Technical Reports Server (NTRS)

    Garn, Michelle; Qu, Min; Chrone, Jonathan; Su, Philip; Karlgaard, Chris

    2008-01-01

    Lunar orbit insertion (LOI) is a critical maneuver for any mission going to the Moon. Optimizing the geometry of this maneuver is crucial to the success of the architecture designed to return humans to the Moon. LOI burns necessary to meet current NASA Exploration Constellation architecture requirements for the lunar sortie missions are driven mainly by the requirement for global access and "anytime" return from the lunar surface. This paper begins by describing the Earth-Moon geometry which creates the worst-case ΔV for both the LOI and the translunar injection (TLI) maneuvers over the full metonic cycle. The trajectory which optimizes the overall ΔV performance of the mission is identified, trade study results covering the entire lunar globe are mapped onto contour plots, and the effects of loitering in low lunar orbit as a means of reducing the insertion ΔV are described. Finally, the lighting conditions on the lunar surface are combined with the LOI and TLI analyses to identify geometries with ideal lighting conditions at sites of interest which minimize the mission ΔV.

  16. Design of an integrated thermoelectric generator power converter for ultra-low power and low voltage body energy harvesters aimed at ExG active electrodes

    NASA Astrophysics Data System (ADS)

    Ataei, Milad; Robert, Christian; Boegli, Alexis; Farine, Pierre-André

    2015-10-01

    This paper describes a detailed design procedure for an efficient thermal body energy harvesting integrated power converter. The procedure is based on the examination of power loss and power transfer in a converter for a self-powered medical device. The efficiency limit for the system is derived and the converter is optimized for the worst case scenario. All optimum system parameters are calculated respecting the transducer constraints and the application form factor. Circuit blocks including pulse generators are implemented based on the system specifications and optimized converter working frequency. At this working condition, it has been demonstrated that the wide area capacitor of the voltage doubler, which provides high voltage switch gating, can be eliminated at the expense of wider switches. With this method, measurements show that 54% efficiency is achieved for just a 20 mV transducer output voltage and 30% of the chip area is saved. The entire electronic board can fit in one EEG or ECG electrode, and the electronic system can convert the electrode to an active electrode.

  17. The Subset Sum game.

    PubMed

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-03-16

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
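
    The sequential selection process described above is easy to simulate. The sketch below plays one instance with both agents using the natural greedy strategy (always select your largest remaining item that still fits); the item weights and capacity are arbitrary examples, and the paper's other heuristic and the optimal opponent are not reproduced here.

    ```python
    def greedy_pick(items, remaining_capacity):
        """Largest still-fitting item, or None if nothing fits."""
        fitting = [w for w in items if w <= remaining_capacity]
        return max(fitting) if fitting else None

    def play(items_a, items_b, capacity):
        items = {"A": sorted(items_a), "B": sorted(items_b)}
        packed = {"A": 0, "B": 0}
        turn = "A"
        while True:
            pick = greedy_pick(items[turn], capacity)
            if pick is not None:
                items[turn].remove(pick)
                packed[turn] += pick
                capacity -= pick
            other = "B" if turn == "A" else "A"
            # The game ends when neither agent can place any remaining item.
            if greedy_pick(items[turn], capacity) is None and greedy_pick(items[other], capacity) is None:
                return packed
            turn = other

    print(play(items_a=[8, 7, 5, 3], items_b=[9, 6, 4, 2], capacity=20))
    ```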

  18. Improving the Critic Learning for Event-Based Nonlinear $H_{\infty}$ Control Design.

    PubMed

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    In this paper, we aim at improving the critic learning criterion to cope with event-based nonlinear H∞ state feedback control design. First, the H∞ control problem is regarded as a two-player zero-sum game, and the adaptive critic mechanism is used to achieve the minimax optimization in an event-based environment. Then, based on an improved updating rule, the event-based optimal control law and the time-based worst-case disturbance law are obtained approximately by training a single critic neural network. The initial stabilizing control is no longer required during the implementation process of the new algorithm. Next, the closed-loop system is formulated as an impulsive model and its stability issue is handled by incorporating the improved learning criterion. The infamous Zeno behavior of the present event-based design is also avoided through theoretical analysis on the lower bound of the minimal intersample time. Finally, applications to aircraft dynamics and a robot arm plant are carried out to verify the efficient performance of the present novel design method.

  19. Tolerance allocation for an electronic system using neural network/Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque

    2001-12-01

    The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances a key factor for controlling cost as well as remaining competitive. At present, tolerance allocation is applied mostly to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is typically used, but it is computationally time-consuming. This paper reviews several methods (worst-case analysis, statistical methods, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte Carlo method providing the underlying data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, while the total cost of the system is minimized by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example, and it can easily be extended to a complex system of n components.
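
    The contrast between worst-case and Monte Carlo (statistical) tolerance analysis mentioned above can be shown on a trivial circuit. The resistor divider, nominal values, tolerance, and uniform sampling below are illustrative assumptions only, and no neural-network surrogate is included.

    ```python
    import random

    R1_NOM, R2_NOM, TOL = 10_000.0, 10_000.0, 0.05   # hypothetical 5 % resistors
    VIN = 5.0

    def vout(r1, r2):
        return VIN * r2 / (r1 + r2)

    # Worst-case analysis: evaluate the extreme corners of the tolerance box.
    corners = [vout(R1_NOM * (1 + s1 * TOL), R2_NOM * (1 + s2 * TOL))
               for s1 in (-1, 1) for s2 in (-1, 1)]
    print(f"worst case: {min(corners):.3f} V .. {max(corners):.3f} V")

    # Monte Carlo analysis: sample component values (uniform within tolerance here).
    random.seed(0)
    samples = [vout(R1_NOM * (1 + random.uniform(-TOL, TOL)),
                    R2_NOM * (1 + random.uniform(-TOL, TOL)))
               for _ in range(100_000)]
    samples.sort()
    lo, hi = samples[int(0.001 * len(samples))], samples[int(0.999 * len(samples))]
    print(f"Monte Carlo 99.8 % interval: {lo:.3f} V .. {hi:.3f} V (narrower than worst case)")
    ```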

  20. The Subset Sum game

    PubMed Central

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-01-01

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012

  1. On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H_{\infty}$ Control.

    PubMed

    Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen

    2018-04-01

    In this paper, based on the adaptive critic learning technique, the control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data and event driven design approach. The nonlinear control problem is formulated as a two-player zero-sum differential game and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in that the data driven learning identifier is combined with the event driven design formulation, in order to develop the adaptive critic controller, thereby accomplishing the nonlinear control. The event driven optimal control law and the time driven worst case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue of integrating data-based control and event-triggering mechanism into establishing advanced adaptive critic systems.

  2. Level II scour analysis for Bridge 120 (LEICUS00070120) on U.S. Route 7, crossing the Leicester River, Leicester, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.; Severance, Timothy

    1997-01-01

    Contraction scour for all modelled flows ranged from 3.8 to 6.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 6.7 ft. The worst-case abutment scour also occurred at the 500-year discharge. Pier scour ranged from 9.1 to 10.2 ft. The worst-case pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
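
    For readers unfamiliar with the Froehlich abutment scour equation cited in these reports, the sketch below evaluates one commonly quoted live-bed form of it (ys/ya = 2.27*K1*K2*(L'/ya)^0.43*Fr^0.61 + 1, as given in FHWA HEC-18 guidance). The equation form, coefficient values, and input numbers here are assumptions for illustration only; the Level II analyses themselves follow the cited USGS/VTAOT procedures.

    ```python
    def froehlich_abutment_scour(ya, L_prime, Fr, K1=1.0, K2=1.0):
        """Froehlich live-bed abutment scour depth ys (same units as ya).

        ya      : average flow depth blocked by the abutment
        L_prime : length of active-flow embankment normal to the flow
        Fr      : Froude number of the approach flow
        K1, K2  : abutment-shape and embankment-skew coefficients
        Equation form assumed from HEC-18; verify against current guidance before use.
        """
        return ya * (2.27 * K1 * K2 * (L_prime / ya) ** 0.43 * Fr ** 0.61 + 1.0)

    # Hypothetical 100-year and 500-year hydraulic conditions (not the report's data).
    for label, ya, lp, fr in [("100-year", 3.0, 10.0, 0.30), ("500-year", 4.0, 15.0, 0.35)]:
        print(f"{label}: estimated abutment scour = {froehlich_abutment_scour(ya, lp, fr):.1f} ft")
    ```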

  3. Level II scour analysis for Bridge 49 (WODSTH00990049) on Town Highway 99, crossing Gulf Brook, Woodstock, Vermont

    USGS Publications Warehouse

    Olson, Scott A.; Hammond, Robert E.

    1996-01-01

    Contraction scour for all modelled flows ranged from 0.0 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour at the left abutment ranged from 3.1 to 10.3 ft. with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 6.4 to 10.4 ft. with the worst-case occurring at the 100-year discharge.Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  4. Level II scour analysis for Bridge 26 (JAMATH00010026) on Town Highway 1, crossing Ball Mountain Brook, Jamaica, Vermont

    USGS Publications Warehouse

    Burns, Ronda L.; Medalie, Laura

    1997-01-01

    Contraction scour for the modelled flows ranged from 1.0 to 2.7 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour ranged from 8.4 to 17.6 ft. The worst-case abutment scour for the right abutment occurred at the incipient-overtopping discharge. For the left abutment, the worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  5. Level II scour analysis for Bridge 37 (TOWNTH00290037) on Town Highway 29, crossing Mill Brook, Townshend, Vermont

    USGS Publications Warehouse

    Burns, R.L.; Medalie, Laura

    1998-01-01

    Contraction scour for all modelled flows ranged from 0.0 to 2.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 6.7 to 8.7 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. Right abutment scour ranged from 7.8 to 9.5 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and Davis, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  6. A semi-quantitative World Health Organization grading scheme evaluating worst tumor differentiation predicts disease-free survival in oral squamous carcinoma patients.

    PubMed

    Jain, Dhruv; Tikku, Gargi; Bhadana, Pallavi; Dravid, Chandrashekhar; Grover, Rajesh Kumar

    2017-08-01

    We investigated World Health Organization (WHO) grading and pattern-of-invasion based histological schemes as independent predictors of disease-free survival in oral squamous carcinoma patients. Tumor resection slides of eighty-seven oral squamous carcinoma patients [pTNM: I&II/III&IV-32/55] were evaluated. Besides various patterns of invasion, the invasive front grade and the predominant and worst (highest) WHO grades were recorded. For worst WHO grading, the poor-undifferentiated component was estimated semi-quantitatively at the advancing tumor edge (invasive growth front) in histology sections. Tumor recurrence was observed in 31 (35.6%) cases. The 2-year disease-free survival was 47% [median: 656 days; follow-up: 14-1450 days]. Using receiver operating characteristic curves, we defined a poor-undifferentiated component exceeding 5% of the tumor as the cutoff to assign an oral squamous carcinoma as grade-3 when following worst WHO grading. Kaplan-Meier curves for disease-free survival revealed a prognostic association with nodal involvement, tumor size, worst WHO grading, most common pattern of invasion, and invasive pattern grading score (sum of the two most predominant patterns of invasion). In further multivariate analysis, tumor size (>2.5cm) and worst WHO grading (grade-3 tumors) independently predicted reduced disease-free survival [HR, 2.85; P=0.028 and HR, 3.37; P=0.031 respectively]. The inter-observer agreement was moderate for observers who semi-quantitatively estimated the percentage of poor-undifferentiated morphology in oral squamous carcinomas. Our results support the value of the semi-quantitative method of assigning tumors as grade-3 under worst WHO grading for predicting reduced disease-free survival. Despite limitations, of the various histological tumor stratification schemes, WHO grading holds adjunctive value for its prognostic role, ease and universal familiarity. Copyright © 2017 Elsevier Inc. All rights reserved.
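
    The semi-quantitative rule described above reduces to a simple threshold. The minimal sketch below follows our reading of the abstract (cutoff of 5% at the invasive front); the function and field names are hypothetical and this is not a validated grading implementation.

    ```python
    def worst_who_grade(percent_poor_undiff_at_front, predominant_grade):
        """Worst WHO grade: grade 3 when the poor-undifferentiated component at the
        invasive front exceeds 5% of the tumor; otherwise fall back to the
        predominant grade. Cutoff taken from the abstract above."""
        return 3 if percent_poor_undiff_at_front > 5.0 else predominant_grade

    for pct, predominant in [(2.0, 2), (8.0, 2), (30.0, 1)]:
        print(f"{pct:5.1f}% poor-undifferentiated at front -> worst WHO grade {worst_who_grade(pct, predominant)}")
    ```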

  7. Isolator fragmentation and explosive initiation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Rae, Philip John; Foley, Timothy J.

    2016-09-19

    Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.

  8. Isolator fragmentation and explosive initiation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Rae, Philip John; Foley, Timothy J.

    2015-09-30

    Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst-case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.

  9. Europe's Austerity Measures Take Their Toll on Academe

    ERIC Educational Resources Information Center

    Labi, Aisha

    2012-01-01

    When the global financial crisis hit in 2008, it looked at first as if many European universities were going to escape the worst. Higher education has long been considered a public right and a taxpayer-financed obligation, and there was optimism that universities, which government leaders hail as drivers of economic growth, would emerge relatively…

  10. Robust attitude control design for spacecraft under assigned velocity and control constraints.

    PubMed

    Hu, Qinglei; Li, Bo; Zhang, Youmin

    2013-07-01

    A novel robust nonlinear control design under the constraints of assigned velocity and actuator torque is investigated for attitude stabilization of a rigid spacecraft. More specifically, a nonlinear feedback control is first developed by explicitly taking into account the constraints on individual angular velocity components as well as external disturbances. Further considering actuator misalignments and magnitude deviation, a modified robust least-squares based control allocator is employed to deal with the problem of distributing the previously designed three-axis moments over the available actuators, in which the focus of the control allocation is to find the optimal control vector of actuators by minimizing the worst-case residual error using programming algorithms. The attitude control performance of the controller structure is evaluated through a numerical example. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
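
    The worst-case residual idea in the allocation step can be sketched with the classical robust least-squares identity: for a spectral-norm bound rho on the uncertainty in the allocation matrix, the minimum over u of the worst-case residual ||(B + dB)u - tau|| equals the minimum of ||Bu - tau|| + rho||u||. The snippet below is a toy illustration of that idea only; the matrix B, the bound rho and the thruster layout are invented, and the paper's allocator additionally handles magnitude deviation and actuator limits, which are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def robust_allocate(B, tau, rho):
    """Distribute a commanded 3-axis torque tau over redundant actuators.

    Minimizes the worst-case residual over norm-bounded uncertainty in the
    allocation matrix:  min_u  max_{||dB|| <= rho} ||(B + dB) u - tau||,
    which equals       min_u  ||B u - tau|| + rho ||u||   (robust least squares).
    """
    m = B.shape[1]
    cost = lambda u: np.linalg.norm(B @ u - tau) + rho * np.linalg.norm(u)
    res = minimize(cost, x0=np.zeros(m), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 20000})
    return res.x

# Hypothetical 4-actuator configuration (columns = per-actuator torque directions):
B = np.array([[1.0, 0.0, 0.0, 0.7],
              [0.0, 1.0, 0.0, 0.7],
              [0.0, 0.0, 1.0, 0.1]])
tau = np.array([0.2, -0.1, 0.05])
u = robust_allocate(B, tau, rho=0.05)
print(u, B @ u)
```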

  11. Sustainability of fisheries through marine reserves: a robust modeling analysis.

    PubMed

    Doyen, L; Béné, C

    2003-09-01

    Among the many factors that contribute to overexploitation of marine fisheries, the role played by uncertainty is important. This uncertainty includes both the scientific uncertainties related to the resource dynamics or assessments and the uncontrollability of catches. Some recent works advocate the use of marine reserves as a central element of future stock management. In the present paper, we study the influence of protected areas upon fisheries sustainability through a simple dynamic model integrating non-stochastic harvesting uncertainty and a constraint of safe minimum biomass level. Using the mathematical concept of an invariance kernel in a robust and worst-case context, we examine through a formal modeling analysis how marine reserves might guarantee viable fisheries. We also show that the sustainability requirement does not necessarily conflict with optimization of catches. Numerical simulations are provided to illustrate the main findings.
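
    A grid-based toy version of the invariance (viability) kernel idea is sketched below for a one-dimensional harvested stock with a protected fraction and worst-case catch: a state is kept only if, under the worst admissible catch, its successor also stays inside the constraint set. The logistic model, the parameter values and the reserve fraction are illustrative assumptions and not the model analysed in the paper.

```python
import numpy as np

# Toy logistic stock with a protected fraction `reserve` and bounded,
# uncontrolled catch on the exploited fraction.  A state B is "viable" if,
# under the WORST admissible catch at every step, the stock stays above the
# safe minimum biomass B_min.

r, K = 0.6, 1.0            # growth rate and carrying capacity (illustrative)
reserve = 0.3              # fraction of the stock protected from harvesting
h_max = 0.15               # worst-case (maximum) catch rate on the exploited part
B_min = 0.2                # safe minimum biomass level

def step_worst(B):
    growth = r * B * (1.0 - B / K)
    worst_catch = h_max * (1.0 - reserve) * B       # worst admissible harvesting
    return B + growth - worst_catch

grid = np.linspace(0.0, K, 401)
viable = grid >= B_min                               # constraint set
for _ in range(200):                                  # fixed-point iteration
    nxt = step_worst(grid)
    stays_in = np.interp(nxt, grid, viable.astype(float)) >= 1.0 - 1e-9
    keep = viable & stays_in
    if np.array_equal(keep, viable):
        break
    viable = keep

kernel = grid[viable]
print("approximate invariance kernel: [%.3f, %.3f]" % (kernel.min(), kernel.max()))
```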

  12. Managing risk in a challenging financial environment.

    PubMed

    Kaufman, Kenneth

    2008-08-01

    Five strategies can help hospital financial leaders balance their organizations' financial and risk positions: understand the hospital's financial condition; determine the desired level of risk; consider total risk; use a portfolio approach; and explore best-case/worst-case scenarios to measure risk.

  13. Including robustness in multi-criteria optimization for intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David

    2012-02-01

    We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for each Pareto optimal plan takes less than 5 min on a standard computer, making a computationally friendly interface to the planner possible. In conclusion, the uncertainty pertinent to the IMPT procedure can be reduced during treatment planning by optimizing plans that emphasize different treatment objectives, including robustness, and then interactively searching for the most-preferred one on the solution Pareto surface.
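
    The robustified objective described above can be sketched in a few lines: evaluate the same dose objective under each pre-calculated error-scenario dose-influence matrix and report the worst value. The snippet uses random placeholder matrices and a simple one-sided target penalty; the matrix sizes, scenario perturbations and objective form are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beamlets, n_scen = 500, 80, 5

# Pre-calculated dose-influence matrices: one nominal plus several error scenarios
D_nominal = rng.random((n_vox, n_beamlets))
D_scen = [D_nominal + 0.05 * rng.standard_normal((n_vox, n_beamlets))
          for _ in range(n_scen)]
d_presc = np.ones(n_vox)                      # prescribed dose per target voxel

def quadratic_underdose(dose):
    """Simple one-sided target objective: penalize dose below prescription."""
    return np.mean(np.maximum(d_presc - dose, 0.0) ** 2)

def robustified_objective(w):
    """Worst objective value realized over all error scenarios (the robustified term)."""
    return max(quadratic_underdose(D @ w) for D in D_scen)

def nominal_objective(w):
    return quadratic_underdose(D_nominal @ w)

w = np.full(n_beamlets, 1.0 / n_beamlets)     # some candidate beamlet weights
print("nominal:", nominal_objective(w), "worst-case:", robustified_objective(w))
```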

  14. Avoiding verisimilitude when modelling ecological responses to climate change: the influence of weather conditions on trapping efficiency in European badgers (Meles meles).

    PubMed

    Noonan, Michael J; Rahman, M Abidur; Newman, Chris; Buesching, Christina D; Macdonald, David W

    2015-10-01

    The signal for climate change effects can be abstruse; consequently, interpretations of evidence must avoid verisimilitude, or else misattribution of causality could compromise policy decisions. Examining climatic effects on wild animal population dynamics requires the ability to trap, observe or photograph and to recapture study individuals consistently. In this regard, we use 19 years of data (1994-2012), detailing the life histories of 1179 individual European badgers over 3288 (re-)trapping events, to test whether trapping efficiency (TE) was associated with season, weather variables (both contemporaneous and time lagged) and body-condition index (BCI). PCA factor loadings demonstrated that TE was affected significantly by temperature and precipitation, as well as time lags in these variables. From multi-model inference, BCI was the principal driver of TE, where badgers in good condition were less likely to be trapped. Our analyses showed that this effect operated mechanistically via weather variables driving BCI, which in turn affected TE. Notably, the very conditions that militated against trapping success have been associated with actual survival and population abundance benefits in badgers. Using these findings to parameterize simulations, projecting best-/worst-case scenario weather conditions and BCI resulted in an 8.6% ± 4.9 (SD) difference in seasonal TE, leading to a potential 55.0% population abundance under-estimation under the worst-case scenario and a 38.6% over-estimation under the best case. Interestingly, simulations revealed that while any single trapping session might prove misrepresentative of the true population abundance due to weather effects, prolonging capture-mark-recapture studies under sub-optimal conditions decreased the accuracy of population estimates significantly. We also use these projection scenarios to explore how weather could impact government-led trapping of badgers in the UK, in relation to TB management. We conclude that population monitoring must be calibrated against the likelihood that weather conditions could be altering trap success directly, and therefore biasing model design. © 2015 John Wiley & Sons Ltd.

  15. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
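
    As a minimal illustration of bounding the safe region with a hypersphere, the sketch below finds the distance from a nominal parameter point to the boundary of a toy requirement g(p) = 0 with a standard constrained optimizer; any uncertainty set contained in the resulting sphere then satisfies the requirement. The constraint function, nominal point and solver settings are assumptions, not the problem definitions used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy requirement: g(p) <= 0 is "safe".  We bound the safe region around the
# nominal parameter p0 by the largest hypersphere that does not touch the
# failure domain, i.e. the distance from p0 to the surface g(p) = 0.

def g(p):
    # hypothetical constraint function of two uncertain parameters
    return p[0] ** 2 + 0.5 * p[1] - 1.0

p0 = np.array([0.2, -0.3])
assert g(p0) < 0, "nominal design must be safe"

res = minimize(lambda p: np.sum((p - p0) ** 2),          # squared distance to nominal
               x0=p0 + 0.1,
               constraints=[{"type": "eq", "fun": g}],     # stay on the failure boundary
               method="SLSQP")

radius = np.sqrt(res.fun)
print("largest safe hypersphere radius around p0: %.3f" % radius)
# Any uncertainty model contained in this sphere satisfies the requirement;
# if it is not contained, feasibility cannot be concluded from this bound alone.
```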

  16. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
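
    The report evaluates an H2-like cost through a Pade-series treatment of the matrix exponential that also copes with defective systems; as a simpler, generic illustration of what such a cost represents, the sketch below computes the standard infinite-horizon H2 cost of a stable closed loop from a Lyapunov equation. The system matrices are hypothetical, and the Lyapunov route is a textbook alternative, not the algorithm of the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a stable closed loop  x' = A x + B w  (white-noise disturbance w) with
# performance output  z = C x, the H2-type cost is  trace(C P C^T), where
# P solves the Lyapunov equation  A P + P A^T + B B^T = 0.

A = np.array([[0.0, 1.0],
              [-2.0, -0.8]])          # hypothetical stable closed-loop dynamics
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)   # solves A P + P A^T = -B B^T
h2_cost = np.trace(C @ P @ C.T)
print("H2-like cost:", h2_cost)
```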

  17. A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack

    NASA Technical Reports Server (NTRS)

    London, K. W.

    2001-01-01

    A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing more specialized representations. This model is used to predict the nutation growth time constant for the Triana Spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst-case sloshing of a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing of a three-degree-of-freedom pendulum model.

  18. Evaluating Methods of Updating Training Data in Long-Term Genomewide Selection

    PubMed Central

    Neyhart, Jeffrey L.; Tiede, Tyler; Lorenz, Aaron J.; Smith, Kevin P.

    2017-01-01

    Genomewide selection is hailed for its ability to facilitate greater genetic gains per unit time. Over breeding cycles, the requisite linkage disequilibrium (LD) between quantitative trait loci and markers is expected to change as a result of recombination, selection, and drift, leading to a decay in prediction accuracy. Previous research has identified the need to update the training population using data that may capture new LD generated over breeding cycles; however, optimal methods of updating have not been explored. In a barley (Hordeum vulgare L.) breeding simulation experiment, we examined prediction accuracy and response to selection when updating the training population each cycle with the best predicted lines, the worst predicted lines, both the best and worst predicted lines, random lines, criterion-selected lines, or no lines. In the short term, we found that updating with the best predicted lines or the best and worst predicted lines resulted in high prediction accuracy and genetic gain, but in the long term, all methods (besides not updating) performed similarly. We also examined the impact of including all data in the training population or only the most recent data. Though patterns among update methods were similar, using a smaller but more recent training population provided a slight advantage in prediction accuracy and genetic gain. In an actual breeding program, a breeder might desire to gather phenotypic data on lines predicted to be the best, perhaps to evaluate possible cultivars. Therefore, our results suggest that an optimal method of updating the training population is also very practical. PMID:28315831

  19. The price of conserving avian phylogenetic diversity: a global prioritization approach

    PubMed Central

    Nunes, Laura A.; Turvey, Samuel T.; Rosindell, James

    2015-01-01

    The combination of rapid biodiversity loss and limited funds available for conservation represents a major global concern. While there are many approaches for conservation prioritization, few are framed as financial optimization problems. We use recently published avian data to conduct a global analysis of the financial resources required to conserve different quantities of phylogenetic diversity (PD). We introduce a new prioritization metric (ADEPD) that After Downlisting a species gives the Expected Phylogenetic Diversity at some future time. Unlike other metrics, ADEPD considers the benefits to future PD associated with downlisting a species (e.g. moving from Endangered to Vulnerable in the International Union for Conservation of Nature Red List). Combining ADEPD scores with data on the financial cost of downlisting different species provides a cost–benefit prioritization approach for conservation. We find that under worst-case spending $3915 can save 1 year of PD, while under optimal spending $1 can preserve over 16.7 years of PD. We find that current conservation spending patterns are only expected to preserve one quarter of the PD that optimal spending could achieve with the same total budget. Maximizing PD is only one approach within the wider goal of biodiversity conservation, but our analysis highlights more generally the danger involved in uninformed spending of limited resources. PMID:25561665

  20. The price of conserving avian phylogenetic diversity: a global prioritization approach.

    PubMed

    Nunes, Laura A; Turvey, Samuel T; Rosindell, James

    2015-02-19

    The combination of rapid biodiversity loss and limited funds available for conservation represents a major global concern. While there are many approaches for conservation prioritization, few are framed as financial optimization problems. We use recently published avian data to conduct a global analysis of the financial resources required to conserve different quantities of phylogenetic diversity (PD). We introduce a new prioritization metric (ADEPD) that After Downlisting a species gives the Expected Phylogenetic Diversity at some future time. Unlike other metrics, ADEPD considers the benefits to future PD associated with downlisting a species (e.g. moving from Endangered to Vulnerable in the International Union for Conservation of Nature Red List). Combining ADEPD scores with data on the financial cost of downlisting different species provides a cost-benefit prioritization approach for conservation. We find that under worst-case spending $3915 can save 1 year of PD, while under optimal spending $1 can preserve over 16.7 years of PD. We find that current conservation spending patterns are only expected to preserve one quarter of the PD that optimal spending could achieve with the same total budget. Maximizing PD is only one approach within the wider goal of biodiversity conservation, but our analysis highlights more generally the danger involved in uninformed spending of limited resources.
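
    The cost-benefit logic, though not the ADEPD calculation itself (which requires phylogenetic and extinction-probability modelling), can be illustrated with a greedy benefit-per-dollar ranking under a fixed budget, as in the toy sketch below; the species names, PD values and costs are entirely made up.

```python
# Greedy benefit-per-cost prioritization under a fixed budget -- a toy
# illustration of the cost-benefit idea, not the ADEPD computation itself.
species = [
    # (name, expected PD preserved if downlisted [years], downlisting cost [$])
    ("sp_A", 120.0, 40_000.0),
    ("sp_B",  45.0,  5_000.0),
    ("sp_C", 300.0, 250_000.0),
    ("sp_D",  10.0,  2_000.0),
]
budget = 60_000.0

ranked = sorted(species, key=lambda s: s[1] / s[2], reverse=True)  # PD-years per dollar
spent, saved, chosen = 0.0, 0.0, []
for name, pd_years, cost in ranked:
    if spent + cost <= budget:
        spent += cost
        saved += pd_years
        chosen.append(name)

print(chosen, "PD-years preserved:", saved, "spent:", spent)
```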

  1. Reactive power planning under high penetration of wind energy using Benders decomposition

    DOE PAGES

    Xu, Yan; Wei, Yanli; Fang, Xin; ...

    2015-11-05

    This study addresses the optimal allocation of reactive power (volt-ampere reactive, VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this particular condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost including the VAR investment cost and the expected generation cost. Therefore RPP under this condition is modelled as a two-stage stochastic programming problem to optimise the VAR location and size in one stage, then to minimise the fuel cost in the other stage, and eventually, to find the global optimal RPP results iteratively. Benders decomposition is used to solve this model with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from a doubly-fed induction generator (DFIG) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.

  2. Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny

    NASA Astrophysics Data System (ADS)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice is fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with those of popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
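
    The paper's ILP over a generalized Buneman graph is not reproduced here; as a smaller, related illustration of weighted multi-state parsimony, the sketch below runs Sankoff's dynamic programming for the small-parsimony problem (fixed tree, arbitrary state-transition weights), which the ILP in effect extends to a search over trees. The tree, the character states and the weight table are toy assumptions.

```python
import math

# Sankoff dynamic programming on a fixed rooted binary tree: minimum total
# weighted number of state changes for one character with arbitrary
# state-transition weights.  (Illustrative only; the paper solves the harder
# problem of also searching over trees via an ILP on a generalized Buneman graph.)

states = ["A", "C", "G", "T"]
w = {(a, b): (0 if a == b else 1) for a in states for b in states}  # unit weights
w[("A", "G")] = w[("G", "A")] = 0.5                                 # cheaper transition

# Tree as nested tuples; leaves are 1-tuples carrying their observed state.
tree = ((("A",), ("G",)), (("G",), ("T",)))

def sankoff(node):
    """Return dict state -> minimal cost of the subtree if `node` takes that state."""
    if len(node) == 1:                        # leaf
        return {s: (0.0 if s == node[0] else math.inf) for s in states}
    left, right = map(sankoff, node)
    return {s: min(w[(s, a)] + ca for a, ca in left.items()) +
               min(w[(s, b)] + cb for b, cb in right.items())
            for s in states}

costs = sankoff(tree)
print("minimum weighted parsimony cost:", min(costs.values()))
```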

  3. Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety

    NASA Astrophysics Data System (ADS)

    Mikula, J. F. Kip

    2005-12-01

    This paper explores and defines the currently accepted concept and philosophy of safety improvement based on Reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a Reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as shown in the REBST case study. In the end it is concluded that an approach combining the two theories works best to reduce Safety Risk.

  4. Control-oriented modelization of a satellite with large flexible appendages and use of worst-case analysis to verify robustness to model uncertainties of attitude control

    NASA Astrophysics Data System (ADS)

    Gasbarri, Paolo; Monti, Riccardo; Campolo, Giovanni; Toglia, Chiara

    2012-12-01

    The design of large space structures (LSS) requires the use of design and analysis tools that include different disciplines. For this kind of spacecraft it is in fact mandatory that mechanical design and guidance, navigation and control (GNC) design are developed within a common framework. One of the key points in the development of LSS is related to dynamic phenomena. These phenomena usually lead to two different interpretations. The former is related to the overall motion of the spacecraft, i.e., the motion of the centre of gravity and the motion around the centre of gravity. The latter is related to the local motion of the elastic elements, which leads to oscillations. These oscillations have in turn a disturbing effect on the motion of the spacecraft. From an engineering perspective, the structural model of flexible spacecraft is generally obtained via FEM involving thousands of degrees of freedom (DOFs). Many of them are not significant from the attitude control point of view. One of the procedures to reduce the structural DOFs is tied to the modal decomposition technique. In the present paper a technique to develop a control-oriented structural model will be proposed. Starting from a detailed FE model of the spacecraft and using a special modal condensation approach, a continuous model is defined. With this transformation the number of DOFs necessary to study the coupled elastic/rigid dynamics is reduced. The final dynamic model will be suitable for the control design implementation. In order to properly design a satellite controller, it is important to recall that the characteristic parameters of the satellite are uncertain. The effect that uncertainties have on control performance must be investigated. A possible solution is that, after the attitude controller is designed on the nominal model, a Verification and Validation (V&V) process is performed to guarantee correct functionality under a large number of scenarios. The V&V process can be very lengthy and expensive: difficulty and cost increase with the overall system dimension, which depends on the number of uncertainties. Uncertain parameters have to be investigated parametrically, via gridding approaches, to determine the robust performance of the control laws. In particular, in this paper we propose to consider two methods: (i) a conventional Monte Carlo analysis, and (ii) a worst-case analysis, i.e., an optimization process to find an estimate of the true worst-case behaviour. Both techniques allow one to verify that the design is robust enough to meet the system performance specification in the presence of uncertainties.
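
    The two verification routes mentioned above, Monte Carlo gridding and an optimization-based worst-case search, can be contrasted on a toy model: the sketch below takes the step-response overshoot of a second-order loop with uncertain natural frequency and damping as the performance metric, samples it randomly, and then maximizes it over the uncertainty box with a global optimizer. The transfer function, parameter bounds and sample count are illustrative assumptions, not the satellite model of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import step, TransferFunction

def overshoot(params):
    """Percent overshoot of a unit step for a 2nd-order loop with uncertain
    natural frequency and damping (toy stand-in for one attitude channel)."""
    wn, zeta = params
    sys = TransferFunction([wn ** 2], [1.0, 2.0 * zeta * wn, wn ** 2])
    t, y = step(sys, T=np.linspace(0, 20, 2000))
    return 100.0 * (np.max(y) - 1.0)

bounds = [(0.8, 1.2), (0.35, 0.7)]          # uncertain parameter ranges (illustrative)

# (i) Monte Carlo sampling of the uncertainty box
rng = np.random.default_rng(1)
samples = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(200, 2))
mc_worst = max(overshoot(p) for p in samples)

# (ii) Worst-case search: maximize the overshoot over the uncertainty set
res = differential_evolution(lambda p: -overshoot(p), bounds, seed=1, tol=1e-6)
print("Monte Carlo worst sample: %.2f %%   optimized worst case: %.2f %%"
      % (mc_worst, -res.fun))
```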

  5. Estimated cost of universal public coverage of prescription drugs in Canada

    PubMed Central

    Morgan, Steven G.; Law, Michael; Daw, Jamie R.; Abraham, Liza; Martin, Danielle

    2015-01-01

    Background: With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. Methods: We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Results: Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. Interpretation: The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. PMID:25780047

  6. Estimated cost of universal public coverage of prescription drugs in Canada.

    PubMed

    Morgan, Steven G; Law, Michael; Daw, Jamie R; Abraham, Liza; Martin, Danielle

    2015-04-21

    With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. © 2015 Canadian Medical Association or its licensors.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, Henry

    This research was mostly concerned with asymmetric vertical displacement event (AVDE) disruptions, which are the worst case scenario for producing a large asymmetric wall force. This is potentially a serious problem in ITER.

  8. Multiple usage of the CD PLUS/UNIX system: performance in practice.

    PubMed Central

    Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A

    1995-01-01

    In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable. PMID:8547902

  9. Multiple usage of the CD PLUS/UNIX system: performance in practice.

    PubMed

    Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A

    1995-10-01

    In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable.

  10. 'Worst case' methodology for the initial assessment of societal risk from proposed major accident installations.

    PubMed

    Carter, D A; Hirst, I L

    2000-01-07

    This paper considers the application of one of the weighted risk indicators used by the Major Hazards Assessment Unit (MHAU) of the Health and Safety Executive (HSE) in formulating advice to local planning authorities on the siting of new major accident hazard installations. In such cases the primary consideration is to ensure that the proposed installation would not be incompatible with existing developments in the vicinity, as identified by the categorisation of the existing developments and the estimation of individual risk values at those developments. In addition a simple methodology, described here, based on MHAU's "Risk Integral" and a single "worst case" event analysis, is used to enable the societal risk aspects of the hazardous installation to be considered at an early stage of the proposal, and to determine the degree of analysis that will be necessary to enable HSE to give appropriate advice.

  11. 40 CFR 90.119 - Certification procedure-testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must select the duty cycle that will result in worst-case emission results for certification. For any... facility, in which case instrumentation and equipment specified by the Administrator must be made available... manufacturers may not use any equipment, instruments, or tools to identify malfunctioning, maladjusted, or...

  12. TH-CD-209-10: Scanning Proton Arc Therapy (SPArc) - The First Robust and Delivery-Efficient Spot Scanning Proton Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Li, X; Zhang, J

    Purpose: To develop a delivery-efficient proton spot-scanning arc therapy technique with robust plan quality. Methods: We developed a Scanning Proton Arc (SPArc) optimization algorithm integrated with (1) control point re-sampling by splitting control points into adjacent sub-control points; (2) energy layer re-distribution by assigning the original energy layers to the new sub-control points; (3) energy layer filtration by deleting low-MU-weighting energy layers; and (4) energy layer re-sampling by sampling additional layers to ensure the optimal solution. A bilateral head and neck oropharynx case and a non-mobile lung target case were tested. Plan quality and total estimated delivery time were compared to the original robust optimized multi-field step-and-shoot arc plan without SPArc optimization (Arcmulti-field) and to standard robust optimized Intensity Modulated Proton Therapy (IMPT) plans. Dose-Volume-Histograms (DVH) of targets and Organs-at-Risk (OARs) were analyzed along with all worst-case scenarios. Total delivery time was calculated based on the assumption of a 360 degree gantry room with 1 RPM rotation speed, 2 ms spot switching time, beam current 1 nA, minimum spot weighting 0.01 MU, and energy-layer-switching-time (ELST) from 0.5 to 4 s. Results: Compared to IMPT, SPArc delivered less integral dose (−14% lung and −8% oropharynx). For the lung case, SPArc reduced the skin max dose by 60%, the rib max dose by 35% and the lung mean dose by 15%. The Conformity Index improved from 7.6 (IMPT) to 4.0 (SPArc). Compared to Arcmulti-field, SPArc reduced the number of energy layers by 61% (276 layers in lung) and 80% (1008 layers in oropharynx) while keeping the same robust plan quality. With ELST from 0.5 s to 4 s, it reduced the Arcmulti-field delivery time by 55%–60% for the lung case and 56%–67% for the oropharynx case. Conclusion: SPArc is the first robust and delivery-efficient proton spot-scanning arc therapy technique that could be implemented in routine clinical practice. For a modern proton machine with ELST close to 0.5 s, SPArc would be a popular treatment option for both single-room and multi-room centers.

  13. Tractable Pareto Optimization of Temporal Preferences

    NASA Technical Reports Server (NTRS)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
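
    On a finite candidate set the iterated weakest-link idea reduces to a leximin comparison: maximize the worst preference value, then, among the ties, the next-worst, and so on, which always returns a Pareto-optimal solution. The sketch below illustrates this selection rule with made-up preference vectors; the paper works on temporal CSPs rather than an enumerated set, so this is only an analogy.

```python
# Iterating weakest-link optimization over a finite candidate set: repeatedly
# maximize the worst component, "freeze" it, and improve the remaining ones.
# On a finite set this amounts to picking the leximin-best preference vector,
# which is guaranteed to be Pareto optimal.  Preference values are made up.

candidates = {
    "s1": [0.4, 0.9, 0.4],
    "s2": [0.4, 0.6, 0.8],
    "s3": [0.4, 0.8, 0.8],
    "s4": [0.3, 1.0, 1.0],
}

def leximin_key(prefs):
    return sorted(prefs)            # compare worst value first, then next-worst, ...

best = max(candidates, key=lambda name: leximin_key(candidates[name]))
print("plain WLO ties on the worst value 0.4; iterated WLO selects:", best)
```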

  14. Level II scour analysis for Bridge 81 (MARSUS00020081) on U.S. Highway 2, crossing the Winooski River, Marshfield, Vermont

    USGS Publications Warehouse

    Ivanoff, Michael A.

    1997-01-01

    Contraction scour for all modelled flows ranged from 2.1 to 4.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 14.3 to 14.4 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping and 500-year discharge. Right abutment scour ranged from 15.3 to 18.5 ft. The worst-case right abutment scour occurred at the 100-year and the incipient roadway-overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  15. In situ LTE exposure of the general public: Characterization and extrapolation.

    PubMed

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure from the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels, with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on the in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.

  16. MP3 player listening sound pressure levels among 10 to 17 year old students.

    PubMed

    Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M

    2011-11-01

    Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤  75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
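
    The Lex figures quoted above come from normalizing a listening level and its daily duration to an 8-hour equivalent exposure, Lex = Leq + 10 log10(T/8 h). The short sketch below applies this standard formula; the 2 h/day duration in the example is a hypothetical value, not a study result.

```python
import math

def lex_8h(leq_dba, hours_per_day):
    """Normalize an equivalent listening level to an 8-hour exposure level (dBA Lex)."""
    return leq_dba + 10.0 * math.log10(hours_per_day / 8.0)

# e.g. listening at 76 dBA (the reported worst-case median) for 2 h/day:
print(round(lex_8h(76.0, 2.0), 1))   # -> 70.0 dBA Lex, below the 85 dBA limit
```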

  17. Level II scour analysis for Bridge 7 (CHARTH00010007) on Town Highway 1, crossing Mad Brook, Charleston, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.; Weber, Matthew A.

    1997-01-01

    Contraction scour for all modelled flows ranged from 0.0 to 0.3 ft. The worst-case contraction scour occurred at the incipient overtopping discharge, which was less than the 100-year discharge. Abutment scour ranged from 6.2 to 9.4 ft. The worst-case abutment scour for the right abutment was 9.4 feet at the 100-year discharge. The worst-case abutment scour for the left abutment was 8.6 feet at the incipient overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  18. Level II scour analysis for Bridge 16, (NEWBTH00500016) on Town Highway 50, crossing Halls Brook, Newbury, Vermont

    USGS Publications Warehouse

    Burns, Ronda L.; Degnan, James R.

    1997-01-01

    Contraction scour for all modelled flows ranged from 2.6 to 4.6 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge. The left abutment scour ranged from 11.6 to 12.1 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. The right abutment scour ranged from 13.6 to 17.9 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  19. Adhesive strength of total knee endoprostheses to bone cement - analysis of metallic and ceramic femoral components under worst-case conditions.

    PubMed

    Bergschmidt, Philipp; Dammer, Rebecca; Zietz, Carmen; Finze, Susanne; Mittelmeier, Wolfram; Bader, Rainer

    2016-06-01

    The adhesive strength of femoral components to the bone cement is a relevant parameter for predicting implant safety. In the present experimental study, three types of cemented femoral components (metallic, ceramic and silica/silane-layered ceramic) of the bicondylar Multigen Plus knee system, implanted on composite femora, were analysed. A pull-off test of the femoral components was performed after different loading conditions and several cementing conditions (four groups, with n=3 components each of metallic, ceramic and silica/silane-layered ceramic in each group). Pull-off forces were comparable for the metallic and the silica/silane-layered ceramic femoral components (mean 4769 N and 4298 N) under standard test conditions, whereas uncoated ceramic femoral components showed reduced pull-off forces (mean 2322 N). Loading under worst-case conditions decreased the adhesive strength by loosening the interface between implant and bone cement for the uncoated metallic and ceramic femoral components. Silica/silane-coated ceramic components remained stably fixed even under worst-case conditions. Loading under high flexion angles can induce interfacial tensile stress, which could promote early implant loosening. In conclusion, a silica/silane-coating layer on the femoral component increased its adhesive strength to the bone cement. Thicker cement mantles (>2 mm) reduce the adhesive strength of the femoral component and can increase the risk of cement break-off.

  20. Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome.

    PubMed

    Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack

    2016-05-10

    Gleason scoring (GS) has major deficiencies, and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has recently been agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse for outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure.

  1. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Element Load Data Processor (ELDAP) Users Manual

    NASA Technical Reports Server (NTRS)

    Ramsey, John K., Jr.; Ramsey, John K., Sr.

    2015-01-01

    Often, the shear and tensile forces and moments are extracted from finite element analyses to be used in off-line calculations for evaluating the integrity of structural connections involving bolts, rivets, and welds. Usually the maximum forces and moments are desired for use in the calculations. In situations where there are numerous structural connections of interest for numerous load cases, the task of finding the true maximum force and/or moment combinations among all fasteners, welds, and load cases becomes difficult. The Element Load Data Processor (ELDAP) software described herein makes this effort manageable. This software eliminates the possibility of overlooking the worst-case forces and moments, which could result in erroneous positive margins of safety, and of selecting inconsistent combinations of forces and moments, which could result in false negative margins of safety. In addition to forces and moments, any scalar quantity output in a PATRAN report file may be evaluated with this software. This software was originally written to fill an urgent need during the structural analysis of the Ares I-X Interstage segment. As such, this software was coded in a straightforward manner with no effort made to optimize or minimize code or to develop a graphical user interface.
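
    The core bookkeeping problem ELDAP addresses, keeping force/moment combinations consistent (from the same load case) while still finding the governing case per connection, can be illustrated with a small table scan; the column names, allowables and numbers below are invented, and the real tool works on PATRAN report files rather than an in-memory table.

```python
import pandas as pd

# Toy element-load table (rows = fastener x load case); values are hypothetical.
loads = pd.DataFrame({
    "fastener":  ["F1", "F1", "F2", "F2"],
    "load_case": ["LC1", "LC2", "LC1", "LC2"],
    "shear":     [350.0, 500.0, 420.0, 380.0],
    "tension":   [900.0, 300.0, 650.0, 700.0],
})

# Taking column-wise maxima across load cases would pair shear from one case with
# tension from another, an inconsistent combination.  Instead keep whole rows and
# pick the governing load case per fastener with a combined severity measure,
# here a simple ratio sum against assumed allowables.
loads["severity"] = loads["shear"] / 600.0 + loads["tension"] / 1000.0
worst = loads.loc[loads.groupby("fastener")["severity"].idxmax()]
print(worst[["fastener", "load_case", "shear", "tension"]])
```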

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hird, Amanda; Chow, Edward; Zhang Liying

    Purpose: To determine the incidence of pain flare following radiotherapy (RT) for painful bone metastases. Materials and Methods: Patients with bone metastases treated with RT were eligible. Worst pain scores and analgesic consumption were collected before, daily during, and for 10 days after treatment. Pain flare was defined as a 2-point increase in the worst pain score (0-10) compared to baseline with no decrease in analgesic intake, or a 25% increase in analgesic intake with no decrease in worst pain score. Pain flare was distinguished from progression of pain by requiring the worst pain score and analgesic intake return to baseline levels after the increase/flare (within the 10-day follow-up period). Results: A total of 111 patients from three cancer centers were evaluable. There were 50 male and 61 female patients with a median age of 62 years (range, 40-89 years). The primary cancers were mainly breast, lung, and prostate. Most patients received a single 8 Gy (64%) or 20 Gy in five fractions (25%). The overall pain flare incidence was 44/111 (40%) during RT and within 10 days following the completion of RT. Patients treated with a single 8 Gy reported a pain flare incidence of 39% (27/70) and, with multiple fractions, 41% (17/41). Conclusion: More than one third of the enrolled patients experienced a pain flare. Identifying at-risk individuals and managing potential pain flares is crucial to achieve an optimal level of care.
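
    A simplified reading of the flare definition above can be expressed as a small classifier over the daily records: flag days meeting either criterion, then require a later return to baseline to distinguish a flare from progression of pain. The function below is a hedged sketch of that logic with hypothetical data, not the scoring procedure used in the study.

```python
def classify_pain_flare(baseline_pain, baseline_analgesic, daily_pain, daily_analgesic):
    """Classify a record as 'flare', 'progression', or 'none' using the quoted
    definition: a 2-point rise in worst pain (0-10) with no drop in analgesic
    intake, or a >=25% rise in analgesic intake with no drop in worst pain,
    counted as a flare only if both later return to baseline levels."""
    increase_days = [
        i for i, (p, a) in enumerate(zip(daily_pain, daily_analgesic))
        if (p - baseline_pain >= 2 and a >= baseline_analgesic)
        or (a >= 1.25 * baseline_analgesic and p >= baseline_pain)
    ]
    if not increase_days:
        return "none"
    last = increase_days[-1]
    # return to baseline on some later day -> flare, otherwise progression of pain
    returned = any(p <= baseline_pain and a <= baseline_analgesic
                   for p, a in zip(daily_pain[last + 1:], daily_analgesic[last + 1:]))
    return "flare" if returned else "progression"

# Hypothetical 10-day follow-up record: pain spikes then settles back.
pain = [4, 4, 7, 7, 5, 4, 4, 4, 4, 4]
analg = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
print(classify_pain_flare(baseline_pain=4, baseline_analgesic=2,
                          daily_pain=pain, daily_analgesic=analg))   # -> flare
```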

  4. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. Lastly, this makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  5. Charging and discharging characteristics of dielectric materials exposed to low- and mid-energy electrons

    NASA Technical Reports Server (NTRS)

    Coakley, P.; Kitterer, B.; Treadaway, M.

    1982-01-01

    Charging and discharging characteristics of dielectric samples exposed to 1-25 keV and 25-100 keV electrons in a laboratory environment are reported. The materials examined comprised OSR, Mylar, Kapton, perforated Kapton, and Alphaquartz, serving as models for materials employed on spacecraft in geosynchronous orbit. The tests were performed in a vacuum chamber with electron guns whose beams were rastered over the entire surface of the planar samples. The specimens were examined in low-impedance-grounded, high-impedance-grounded, and isolated configurations. The worst-case and average peak discharge currents were observed to be independent of the incident electron energy, the time-dependent changes in the worst case discharge peak current were independent of the energy, and predischarge surface potentials are negligibly dependent on incident monoenergetic electrons.

  6. Worst-case space radiation environments for geocentric missions

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.; Seltzer, S. M.

    1976-01-01

    Worst-case possible annual radiation fluences of energetic charged particles in the terrestrial space environment, and the resultant depth-dose distributions in aluminum, were calculated in order to establish absolute upper limits to the radiation exposure of spacecraft in geocentric orbits. The results are a concise set of data intended to aid in the determination of the feasibility of a particular mission. The data may further serve as guidelines in the evaluation of standard spacecraft components. Calculations were performed for each significant particle species populating or visiting the magnetosphere, on the basis of volume occupied by or accessible to the respective species. Thus, magnetospheric space was divided into five distinct regions using the magnetic shell parameter L, which gives the approximate geocentric distance (in earth radii) of a field line's equatorial intersect.

  7. ``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis

    NASA Astrophysics Data System (ADS)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
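
    The credit-based reasoning behind such amortised bounds can be illustrated, outside the authors' type system, with the textbook two-stack queue: each enqueue conceptually deposits credits that later pay for the worst-case transfer performed by a dequeue, so every operation is O(1) amortised even though an individual dequeue can cost O(n). A small illustrative sketch (not the paper's analysis):

        class AmortisedQueue:
            """FIFO queue built from two stacks; every operation is O(1) amortised."""

            def __init__(self):
                self.front, self.back = [], []

            def enqueue(self, x):
                # Costs one step and conceptually banks two credits: one to move x
                # to 'front' later, one to pop it during the eventual dequeue.
                self.back.append(x)

            def dequeue(self):
                if not self.front:
                    # Worst-case O(n) transfer, fully paid for by the banked credits.
                    while self.back:
                        self.front.append(self.back.pop())
                return self.front.pop()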

  8. Cardiac imaging with multi-sector data acquisition in volumetric CT: variation of effective temporal resolution and its potential clinical consequences

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Taha, Basel H.; Vass, Melissa L.; Seamans, John L.; Okerlund, Darin R.

    2009-02-01

    With the increasing longitudinal detector dimension available in diagnostic volumetric CT, the step-and-shoot scan is becoming popular for cardiac imaging. In comparison to helical scan, step-and-shoot scan decouples patient table movement from cardiac gating/triggering, which facilitates cardiac imaging via multi-sector data acquisition, the management of inter-cycle heart beat variation (arrhythmia), and radiation dose efficiency. Ideally, multi-sector data acquisition can improve temporal resolution by a factor equal to the number of sectors (best scenario). In reality, however, the effective temporal resolution is jointly determined by gantry rotation speed and patient heart rate, and may be significantly lower than the ideal value or show no improvement at all (worst scenario). Hence, it is clinically relevant to investigate the behavior of effective temporal resolution in cardiac imaging with multi-sector data acquisition. In this study, a 5-second cine scan of a porcine heart, spanning 6 cardiac cycles, was acquired. In addition to theoretical analysis and a motion phantom study, the clinical consequences of the variation in effective temporal resolution are evaluated qualitatively and quantitatively. By employing a 2-sector image reconstruction strategy, a total of 15 cases (the C(6, 2) combinations of cardiac-cycle pairs) between the best and worst scenarios are studied, providing informative guidance for the design and optimization of cardiac imaging in volumetric CT with multi-sector data acquisition.

  9. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  10. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.

  11. Integrated layout based Monte-Carlo simulation for design arc optimization

    NASA Astrophysics Data System (ADS)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules whose sum equals the critical pitch defined by the technology. Within a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533.

  12. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
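
    The space-time tradeoff alluded to above is, in spirit, a Feynman path sum: a single output amplitude can be computed by summing over intermediate basis states gate by gate, with memory proportional to circuit depth rather than to the 2^n-entry state vector, at the cost of time exponential in the number of branching gates. A purely illustrative sketch of the idea (not the prototype simulator described in the paper):

        import numpy as np

        def amplitude(gates, x, y):
            """Compute <y| U_L ... U_1 |x> without storing the full state vector.

            gates: list of (qubit_indices, unitary) applied left to right.
            Memory is O(circuit depth); time is exponential in branching gates.
            """
            def rec(k, state):
                if k == len(gates):
                    return 1.0 + 0.0j if state == y else 0.0j
                qubits, U = gates[k]
                # Column of the gate selected by the current bits on 'qubits'.
                col = sum(((state >> q) & 1) << i for i, q in enumerate(qubits))
                total = 0.0j
                for row in range(U.shape[0]):        # branch over output basis states
                    a = U[row, col]
                    if a == 0:
                        continue
                    new_state = state
                    for i, q in enumerate(qubits):
                        new_state = (new_state & ~(1 << q)) | (((row >> i) & 1) << q)
                    total += a * rec(k + 1, new_state)
                return total
            return rec(0, x)

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        # CNOT with control = qubit 0 (low bit of the local index), target = qubit 1.
        CNOT = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
        # Hadamard then CNOT maps |00> to a Bell state; amplitude on |11> is ~0.7071.
        print(amplitude([((0,), H), ((0, 1), CNOT)], 0b00, 0b11))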

  13. Estimated Probability of Chest Injury During an International Space Station Mission

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Milo, Eric A.; Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G., Jr.

    2013-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Chest Injury Module (CIM). The CIM calculates the incidence rate of chest injury per person-year of spaceflight on the International Space Station (ISS). The CIM was built so that the probability of chest injury during one year on ISS could be predicted. These results will be incorporated into the IMM Chest Injury Clinical Finding Form and used within the parent IMM model.

  14. Estimated Probability of Traumatic Abdominal Injury During an International Space Station Mission

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G., Jr.; McRae, Michael P.

    2013-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Abdominal Injury Module (AIM). The AIM calculates an incidence rate of traumatic abdominal injury per person-year of spaceflight on the International Space Station (ISS). The AIM was built so that the probability of traumatic abdominal injury during one year on ISS could be predicted. This result will be incorporated into the IMM Abdominal Injury Clinical Finding Form and used within the parent IMM model.

  15. Automated annotation of functional imaging experiments via multi-label classification

    PubMed Central

    Turner, Matthew D.; Chakrabarti, Chayan; Jones, Thomas B.; Xu, Jiawei F.; Fox, Peter T.; Luger, George F.; Laird, Angela R.; Turner, Jessica A.

    2013-01-01

    Identifying the experimental methods in human neuroimaging papers is important for grouping meaningfully similar experiments for meta-analyses. Currently, this can only be done by human readers. We present the performance of common machine learning (text mining) methods applied to the problem of automatically classifying or labeling this literature. Labeling terms are from the Cognitive Paradigm Ontology (CogPO), the text corpora are abstracts of published functional neuroimaging papers, and the methods use the performance of a human expert as training data. We aim to replicate the expert's annotation of multiple labels per abstract identifying the experimental stimuli, cognitive paradigms, response types, and other relevant dimensions of the experiments. We use several standard machine learning methods: naive Bayes (NB), k-nearest neighbor, and support vector machines (specifically SMO or sequential minimal optimization). Exact match performance ranged from only 15% in the worst cases to 78% in the best cases. NB methods combined with binary relevance transformations performed strongly and were robust to overfitting. This collection of results demonstrates what can be achieved with off-the-shelf software components and little to no pre-processing of raw text. PMID:24409112
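
    The binary relevance transformation with a naive Bayes base learner, which the abstract reports as a strong and robust combination, can be reproduced with off-the-shelf components. A rough sketch using scikit-learn (the toy abstracts and label names are invented placeholders, not CogPO terms or the authors' corpus):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MultiLabelBinarizer

        # Toy abstracts and multi-label annotations (placeholders only).
        abstracts = [
            "subjects viewed emotional faces during fMRI scanning",
            "participants performed a finger tapping motor task",
            "auditory oddball paradigm with button press responses",
        ]
        labels = [{"visual", "emotion"}, {"motor"}, {"auditory", "motor"}]

        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)

        # Binary relevance = one independent binary classifier per label.
        model = make_pipeline(
            TfidfVectorizer(),
            OneVsRestClassifier(MultinomialNB()),
        )
        model.fit(abstracts, Y)
        pred = model.predict(["participants performed a motor tapping task"])
        print(mlb.inverse_transform(pred))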

  16. Optimal control of an invasive species using a reaction-diffusion model and linear programming

    USGS Publications Warehouse

    Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.

    2017-01-01

    Managing an invasive species is particularly challenging as little is generally known about the species’ biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost or that minimizes the control cost for containment or protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty on the possible invasion scenarios. Then, a best strategy can be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegus (Salvator merianae) in South Florida. There is uncertainty about tegu demography and we considered several combinations of model parameters, exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegus’ dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrated control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. For the establishment of a containment area, assuming an exponential growth we show that with current control methods it might not be possible to implement such a strategy for some of the models that we considered. Including different possible models allows an examination of how the strategy is expected to perform in different scenarios. Then, a strategy that accounts for the risk attitude of the decision-maker can be designed.
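
    Once the per-cell dynamics are linearized, framing spatial control as a linear program is mechanical. A heavily simplified single-step sketch with scipy (the growth factor, removal effectiveness, and budget are invented numbers, not parameters of the tegu model):

        import numpy as np
        from scipy.optimize import linprog

        # Five cells; n[i] = expected invaders, g = per-step growth factor,
        # e = invaders removed per unit of control effort, budget = total effort.
        n = np.array([10.0, 6.0, 3.0, 1.0, 0.5])
        g, e, budget = 1.4, 2.0, 5.0

        # Decision variables u[i] >= 0: control effort allocated to each cell.
        # Minimizing expected invaders after growth and removal, sum(g*n - e*u),
        # is equivalent to maximizing sum(e*u) under the budget.
        c = -e * np.ones(len(n))                     # linprog minimizes c @ u
        A_ub, b_ub = [np.ones(len(n))], [budget]     # total effort <= budget
        bounds = [(0.0, g * ni / e) for ni in n]     # cannot remove more than present

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print("optimal effort per cell:", res.x)
        print("expected invaders remaining:", float(g * n.sum() - e * res.x.sum()))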

  17. Hepatitis A and E Co-Infection with Worst Outcome.

    PubMed

    Saeed, Anjum; Cheema, Huma Arshad; Assiri, Asaad

    2016-06-01

    Infections are still a major problem in developing countries like Pakistan because of poor sewage disposal and economic constraints. Acute viral hepatitis such as A and E is not uncommon in the pediatric age group because of unhygienic food handling and poor sewage disposal, but the majority recover well without any complications. Co-infections are rare occurrences, and physicians need to be well aware while managing such conditions to avoid the worst outcome. Co-infection with hepatitis A and E is reported occasionally in the literature; other concurrent infections, such as hepatitis A with Salmonella, and hepatotropic viruses like viral hepatitis B and C, are also described. Co-infections should be kept in consideration when someone presents with atypical symptoms or an unusual disease course, as in the presented case. We report here a girl who had concurrent acute hepatitis A and E infections, presented with hepatic encephalopathy, and had the worst outcome despite all supportive measures being taken.

  18. Implementation of School Health Promotion: Consequences for Professional Assistance

    ERIC Educational Resources Information Center

    Boot, N. M. W. M.; de Vries, N. K.

    2012-01-01

    Purpose: This case study aimed to examine the factors influencing the implementation of health promotion (HP) policies and programs in secondary schools and the consequences for professional assistance. Design/methodology/approach: Group interviews were held in two schools that represented the best and worst case of implementation of a health…

  19. Compression in the Superintendent Ranks

    ERIC Educational Resources Information Center

    Saron, Bradford G.; Birchbauer, Louis J.

    2011-01-01

    Sadly, the fiscal condition of school systems now not only is troublesome, but in some cases has surpassed all expectations for the worst-case scenario. Among the states, one common response is to drop funding for public education to inadequate levels, leading to permanent program cuts, school closures, staff layoffs, district dissolutions and…

  20. Optimal Stabilization of Social Welfare under Small Variation of Operating Condition with Bifurcation Analysis

    NASA Astrophysics Data System (ADS)

    Chanda, Sandip; De, Abhinandan

    2016-12-01

    This paper proposes a social welfare optimization technique, built on a state-space model and bifurcation analysis, that offers a substantial stability margin even in the most adverse operating states of power system networks. Restoration of the dynamic price equilibrium of the power market is achieved by forming the Jacobian of the sensitivity matrix to regulate the state variables, standardizing the quality of the solution under the worst possible network contingencies and even with the inclusion of intermittent renewable energy sources. The model was tested on the IEEE 30-bus system, with particle swarm optimization used to integrate the proposed model and methodology.

  1. Design framework for entanglement-distribution switching networks

    NASA Astrophysics Data System (ADS)

    Drost, Robert J.; Brodsky, Michael

    2016-09-01

    The distribution of quantum entanglement appears to be an important component of applications of quantum communications and networks. The ability to centralize the sourcing of entanglement in a quantum network can provide for improved efficiency and enable a variety of network structures. A necessary feature of an entanglement-sourcing network node comprising several sources of entangled photons is the ability to reconfigurably route the generated pairs of photons to network neighbors depending on the desired entanglement sharing of the network users at a given time. One approach to such routing is the use of a photonic switching network. The requirements for an entanglement distribution switching network are less restrictive than for typical conventional applications, leading to design freedom that can be leveraged to optimize additional criteria. In this paper, we present a mathematical framework defining the requirements of an entanglement-distribution switching network. We then consider the design of such a switching network using a number of 2 × 2 crossbar switches, addressing the interconnection of these switches and efficient routing algorithms. In particular, we define a worst-case loss metric and consider 6 × 6, 8 × 8, and 10 × 10 network designs that optimize both this metric and the number of crossbar switches composing the network. We pay particular attention to the 10 × 10 network, detailing novel results proving the optimality of the proposed design. These optimized network designs have great potential for use in practical quantum networks, thus advancing the concept of quantum networks toward reality.
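
    One plausible reading of a worst-case loss figure for such a fabric is the largest number of crossbar stages any photon must traverse on its best available route, maximized over all required input-output pairs and weighted by an assumed per-switch insertion loss. A toy sketch of that bookkeeping (the topology and loss value are invented, and this is not the metric or the optimized designs from the paper):

        from collections import deque

        # Toy fabric: directed connections between inputs, 2x2 switches, outputs.
        edges = {
            "in0": ["s1"], "in1": ["s1"], "in2": ["s2"], "in3": ["s2"],
            "s1": ["s3", "out0"], "s2": ["s3", "out3"],
            "s3": ["out1", "out2"],
        }
        LOSS_PER_SWITCH_DB = 0.5   # assumed insertion loss of one crossbar

        def min_switches(src, dst):
            """Fewest crossbar switches on any path from src to dst, or None."""
            queue, seen = deque([(src, 0)]), {src}
            while queue:
                node, hops = queue.popleft()
                if node == dst:
                    return hops
                for nxt in edges.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, hops + (1 if nxt.startswith("s") else 0)))
            return None

        pairs = [(i, o) for i in ("in0", "in1", "in2", "in3")
                 for o in ("out0", "out1", "out2", "out3")]
        depths = [d for d in (min_switches(i, o) for i, o in pairs) if d is not None]
        print("worst-case loss:", max(depths) * LOSS_PER_SWITCH_DB, "dB")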

  2. Detailed clinicopathological characterization of progressive alopecia areata patients treated with i.v. corticosteroid pulse therapy toward optimization of inclusion criteria.

    PubMed

    Sato, Misato; Amagai, Masayuki; Ohyama, Manabu

    2014-11-01

    The management of progressive alopecia areata (AA) is often challenging. Recently, i.v. corticosteroid pulse therapy has been reported to be effective for acute and severe AA; however, the inclusion criteria have not been sufficiently precise, leaving a chance that its efficacy could be further improved by optimizing therapeutic indications. In our attempt to delineate the factors that correlate with favorable outcomes, we closely evaluated the clinicopathological findings and prognoses of progressive AA cases treated with a single round of steroid pulse therapy and documented with full sets of image and pathology records during the course. Almost complete hair regrowth was achieved and maintained for up to 2 years in five out of seven AA patients with varying degrees of clinical severity. Interestingly, the worst clinical presentation observed during the course correlated with the size of the area where hairs with dystrophic roots were pulled, rather than with the extent of visible hair loss at the first visit. Dermoscopy detected disease spread but contributed little to assessing prognosis. Dense perifollicular cell infiltration was detected in all cases treated within 4 weeks of onset and in those treated later but with an excellent response. Importantly, the cases with poor or incomplete hair regrowth were treated 6-8 weeks after onset and showed moderate inflammatory change with a high telogen conversion rate. These findings mandate global dermoscopy and a hair pull test for judging the treatment indication, and suggest that early administration of high-dose corticosteroid, ideally within 4 weeks of onset, enables efficient suppression of active inflammation and maximizes the effectiveness of the remedy. © 2014 Japanese Dermatological Association.

  3. Optimal integrated abundances for chemical tagging of extragalactic globular clusters

    NASA Astrophysics Data System (ADS)

    Sakari, Charli M.; Venn, Kim; Shetrone, Matthew; Dotter, Aaron; Mackey, Dougal

    2014-09-01

    High-resolution integrated light (IL) spectroscopy provides detailed abundances of distant globular clusters whose stars cannot be resolved. Abundance comparisons with other systems (e.g. for chemical tagging) require understanding the systematic offsets that can occur between clusters, such as those due to uncertainties in the underlying stellar population. This paper analyses high-resolution IL spectra of the Galactic globular clusters 47 Tuc, M3, M13, NGC 7006, and M15 to (1) quantify potential systematic uncertainties in Fe, Ca, Ti, Ni, Ba, and Eu and (2) identify the most stable abundance ratios that will be useful in future analyses of unresolved targets. When stellar populations are well modelled, uncertainties are ~0.1-0.2 dex based on sensitivities to the atmospheric parameters alone; in the worst-case scenarios, uncertainties can rise to 0.2-0.4 dex. The [Ca I/Fe I] ratio is identified as the optimal integrated [α/Fe] indicator (with offsets ≲ 0.1 dex), while [Ni I/Fe I] is also extremely stable to within ≲ 0.1 dex. The [Ba II/Eu II] ratios are also stable when the underlying populations are well modelled and may also be useful for chemical tagging.

  4. Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.

    PubMed

    Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z

    2012-07-01

    Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.

  5. 40 CFR 85.2115 - Notification of intent to certify.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...

  6. 40 CFR 85.2115 - Notification of intent to certify.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...

  7. 40 CFR 85.2115 - Notification of intent to certify.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...

  8. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  9. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  10. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  11. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  12. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  13. Adaptive Attitude Control of the Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Muse, Jonathan

    2010-01-01

    An H(sub infinity)-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H(sub infinity)-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.

  14. Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome

    PubMed Central

    Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack

    2016-01-01

    Background: Gleason scoring (GS) has major deficiencies, and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has recently been agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse as the outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Methods: Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on the overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Results: Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. Conclusions: This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure. PMID:27100731
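
    The statistical machinery here is standard survival analysis. A rough sketch of the same style of comparison on synthetic data, assuming the lifelines library and invented column names (not the authors' code or cohort):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.statistics import multivariate_logrank_test

        rng = np.random.default_rng(0)
        n = 500
        # Synthetic cohort: grade group 1-5, follow-up (months), death indicator.
        grade = rng.integers(1, 6, size=n)
        months = rng.exponential(scale=120 / grade)   # higher grade, shorter survival
        death = (rng.random(n) < 0.6).astype(int)

        df = pd.DataFrame({"grade_group": grade, "months": months, "pca_death": death})

        # Univariable comparison of the five grade groups (log-rank test).
        print(multivariate_logrank_test(df["months"], df["grade_group"], df["pca_death"]).p_value)

        # Cox proportional hazards regression on grade group.
        cph = CoxPHFitter()
        cph.fit(df, duration_col="months", event_col="pca_death")
        cph.print_summary()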

  15. RMP*Comp

    EPA Pesticide Factsheets

    You can use this free software program to complete the Off-site Consequence Analyses (both worst case scenarios and alternative scenarios) required under the Risk Management Program rule, so that you don't have to do calculations by hand.

  16. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...

  17. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...

  18. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...

  19. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...

  20. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...

  1. Calculations of the skyshine gamma-ray dose rates from independent spent fuel storage installations (ISFSI) under worst case accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, J.V. III; Cramer, S.N.; Knight, J.R.

    1980-09-01

    Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.

  2. Most Probable Fire Scenarios in Spacecraft and Extraterrestrial Habitats: Why NASA's Current Test 1 Might Not Always be Conservative

    NASA Technical Reports Server (NTRS)

    Olson, S. L.

    2004-01-01

    NASA's current method of material screening determines fire resistance under conditions representing a worst case for normal-gravity flammability - the Upward Flame Propagation Test (Test 1). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.

  3. Most Probable Fire Scenarios in Spacecraft and Extraterrestrial Habitats: Why NASA's Current Test 1 Might Not Always Be Conservative

    NASA Technical Reports Server (NTRS)

    Olson, S. L.

    2004-01-01

    NASA's current method of material screening determines fire resistance under conditions representing a worst case for normal-gravity flammability - the Upward Flame Propagation Test (Test 1 [1]). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.

  4. LANDSAT-D MSS/TM tuned orbital jitter analysis model LDS900

    NASA Technical Reports Server (NTRS)

    Pollak, T. E.

    1981-01-01

    The final LANDSAT-D orbital dynamic math model (LSD900), comprised of all test-validated substructures, was used to evaluate the jitter response of the MSS/TM experiments. A dynamic forced response analysis was performed at both the MSS and TM locations on all structural modes considered (through 200 Hz). The analysis determined the roll angular response of the MSS/TM experiments to the excitation generated by component operation. Cross-axis and cross-experiment responses were also calculated. The excitations were analytically represented by seven- and nine-term Fourier series approximations, for the MSS and TM experiments respectively, which enabled linear harmonic solution techniques to be applied to the response calculations. Single worst-case jitter was estimated by variations of the eigenvalue spectrum of model LSD900. The probability of any worst-case mode occurrence was investigated.

  5. [Hygienic optimization of the use of chemical protective means on railway transport].

    PubMed

    Kaptsov, V A; Pankova, V B; Elizarov, B B; Mezentsev, A P; Komleva, E A

    2004-01-01

    The paper presents data characterizing the working conditions of railway workers. The highest levels of noise and vibration, and the greatest physical workload and work intensity, are found in the energy supply, car, and locomotive services and in track facilities, where working conditions are worst. These working conditions impose a significant occupational risk on railway workers, so preventing health disorders through the use of chemical protective means is a topical problem. The priority lines of the hygienic rationale for optimizing the selection and use of chemical protective means for workers exposed to occupational hazards are determined.

  6. An Alaskan Theater Airlift Model.

    DTIC Science & Technology

    1982-02-19

    overt attack on American soil. In any case, such a reaction represents the worst-case scenario in that theater forces would be denied the advantages of...

  7. 78 FR 49831 - Endangered and Threatened Wildlife and Plants; Proposed Designation of Critical Habitat for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... Service (NPS) for the Florida leafwing and the pine rockland ecosystem, in general. Sea Level Rise... habitat. In the best case scenario, which assumes low sea level rise, high financial resources, proactive... human population. In the worst case scenario, which assumes high sea level rise, low financial resources...

  8. A Different Call to Arms: Women in the Core of the Communications Revolution.

    ERIC Educational Resources Information Center

    Rush, Ramona R.

    A "best case" model for the role of women in the postindustrial communications era predicts positive leadership roles based on the preindustrial work characteristics of cooperation and consensus. A "worst case" model finds women entrepreneurs succumbing to the competitive male ethos and extracting the maximum amount of work…

  9. Homework Interventions for Children with Attention and Learning Problems: Where Is the "Home" in "Homework?"

    ERIC Educational Resources Information Center

    Sheridan, Susan M.

    2009-01-01

    Homework is a reality in the lives of most American school children. At its best, homework is a highly useful and appropriate strategy. At its worst, it can wreak havoc in the lives of many children and families who fail to master behavioral and environmental routines that create conditions and patterns conducive for optimal performance. Thus,…

  10. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedup in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not previously be computed. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
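
    The optimality guarantee in the DEE/A* framework rests on ordinary best-first A* search with an admissible heuristic, i.e. one that never overestimates the remaining conformational energy. A generic sketch of that search loop (not gOSPREY's GPU implementation):

        import heapq
        from itertools import count

        def a_star(start, is_goal, successors, heuristic):
            """Return (path, cost) of a minimum-cost goal node, or (None, inf).

            With an admissible heuristic, the first goal popped from the frontier
            is globally optimal (the GMEC in the protein design setting), although
            the search can still take exponential time and memory in the worst case.
            """
            tie = count()                       # breaks ties between equal f-values
            frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
            best_g = {start: 0.0}
            while frontier:
                _, _, g, node, path = heapq.heappop(frontier)
                if is_goal(node):
                    return path, g
                for nxt, step_cost in successors(node):
                    g2 = g + step_cost
                    if g2 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g2
                        heapq.heappush(frontier,
                                       (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]))
            return None, float("inf")

        # Toy usage: cheapest path on a small graph with a zero (admissible) heuristic.
        graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
        print(a_star("a", lambda n: n == "c", lambda n: graph[n], lambda n: 0.0))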

  11. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
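
    Dynamic programming supplies the optimality benchmark: at each stage, prioritize the goal whose pursuit maximizes the expected number of goals ultimately attained. A minimal sketch under invented assumptions (two goals, a fixed per-step success probability, a hard deadline), not the authors' experimental model:

        from functools import lru_cache

        P_SUCCESS = 0.6   # assumed chance that one work step advances the chosen goal

        @lru_cache(maxsize=None)
        def plan(steps_left, need_a, need_b):
            """Return (max expected goals attained by the deadline, goal to work on now)."""
            attained = (need_a == 0) + (need_b == 0)
            if steps_left == 0 or attained == 2:
                return attained, None
            choices = []
            for goal, after in (("A", (max(need_a - 1, 0), need_b)),
                                ("B", (need_a, max(need_b - 1, 0)))):
                if (goal == "A" and need_a == 0) or (goal == "B" and need_b == 0):
                    continue                              # never work a finished goal
                hit, _ = plan(steps_left - 1, *after)     # the step succeeded
                miss, _ = plan(steps_left - 1, need_a, need_b)  # the step failed
                choices.append((P_SUCCESS * hit + (1 - P_SUCCESS) * miss, goal))
            return max(choices)

        # Four steps left; goal A needs one more successful step, goal B needs three.
        print(plan(4, 1, 3))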

  12. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carryover [the storage of water in one year for use in a later year or years], head constraints, and capacity constraints was tested.
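
    Stripped of the groundwater simulation, the optimization core is an ordinary linear program over monthly deliveries. A toy sketch with scipy using invented costs, demands, and capacities (not the Santa Barbara data):

        import numpy as np
        from scipy.optimize import linprog

        months = 3
        cost_surface, cost_ground = 100.0, 160.0        # $ per acre-ft (assumed)
        demand = np.array([900.0, 1100.0, 1000.0])      # acre-ft per month (assumed)
        surface_cap = np.array([700.0, 500.0, 600.0])   # drought-limited surface supply
        ground_cap = 650.0                              # pumping limit standing in for head constraints

        # Variables: [surface_1..3, ground_1..3]; minimize total supply cost.
        c = np.concatenate([np.full(months, cost_surface), np.full(months, cost_ground)])

        # Meet demand exactly each month: surface_m + ground_m = demand_m.
        A_eq = np.hstack([np.eye(months), np.eye(months)])
        b_eq = demand

        bounds = [(0.0, s) for s in surface_cap] + [(0.0, ground_cap)] * months
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        print(res.x.reshape(2, months))   # row 0: surface deliveries, row 1: groundwater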

  13. RMP Guidance for Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.

  14. The Integrated Medical Model - Optimizing In-flight Space Medical Systems to Reduce Crew Health Risk and Mission Impacts

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Iyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy

    2009-01-01

    The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM. Nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, and potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated-case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of optimizing in-flight medical systems based on crew and mission parameters. This presentation will illustrate how to apply quantitative risk assessment methods to optimize the mass and volume of space-based medical systems for a space flight mission given the level of crew health and mission risk.
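
    The computational pattern described is a resource-decrementing Monte Carlo over mission trials. A heavily simplified sketch with invented conditions, incidence rates, and kit sizes (not the IMM database or its algorithms):

        import random

        # Hypothetical conditions: (incidence per person-year, supply units per treatment).
        CONDITIONS = {"back pain": (0.3, 2), "skin rash": (0.2, 1), "dental issue": (0.05, 3)}
        CREW, MISSION_YEARS, SUPPLY_UNITS, TRIALS = 4, 1.0, 12, 25000

        def run_trial(rng):
            supplies, untreated = SUPPLY_UNITS, 0
            for rate, units_needed in CONDITIONS.values():
                # Bernoulli occurrence per crew member (a small-rate approximation).
                occurrences = sum(rng.random() < rate * MISSION_YEARS for _ in range(CREW))
                for _ in range(occurrences):
                    if supplies >= units_needed:
                        supplies -= units_needed     # treated: decrement the medical kit
                    else:
                        untreated += 1               # resources exhausted (worst-case branch)
            return untreated

        rng = random.Random(1)
        results = [run_trial(rng) for _ in range(TRIALS)]
        print("P(at least one untreated event):", sum(r > 0 for r in results) / TRIALS)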

  15. Real Time Energy Management Control Strategies for Hybrid Powertrains

    NASA Astrophysics Data System (ADS)

    Zaher, Mohamed Hegazi Mohamed

    In order to improve the fuel efficiency and reduce the emissions of mobile vehicles, various hybrid powertrain concepts have been developed over the years. This thesis focuses on embedded control of hybrid powertrain concepts for mobile vehicle applications. An optimal robust control approach is used to develop a real-time energy management strategy for continuous operations. The main idea is to store the normally wasted mechanical regenerative energy in energy storage devices for later use. The regenerative energy recovery opportunity exists in any condition where the speed of motion is in the opposite direction to the applied force or torque. This is the case when the vehicle is braking or decelerating, or when the motion is driven by gravity or by the load. There are three main concepts for regenerative energy storage devices in hybrid vehicles: electric, hydraulic, and flywheel. The real-time control challenge is to balance the system power demand between the engine and the hybrid storage device, without depleting the energy storage device or stalling the engine in any work cycle, while making optimal use of the energy-saving opportunities in a given operational, often repetitive, cycle. In the worst-case scenario, only the engine is used and the hybrid system is completely disabled. A rule-based control strategy is developed and tuned for different work cycles and linked to a gain scheduling algorithm. The gain scheduling algorithm identifies the cycle being performed by the machine and its position via GPS, and maps them to the gains.
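
    A minimal flavour of such rule-based supervisory logic, with invented thresholds, power limits, and signal names (this is not the controller developed in the thesis):

        def split_power(demand_kw, soc, engine_max_kw=150.0, motor_max_kw=60.0,
                        soc_low=0.25, soc_high=0.85):
            """Return (engine_kw, storage_kw); negative storage power means regeneration.

            Rules: capture regenerative energy when braking or load-driven, assist
            the engine from storage when the state of charge allows, and fall back
            to engine-only operation (the worst case) when storage is depleted.
            """
            if demand_kw < 0.0:                        # braking / load-driven motion
                if soc < soc_high:
                    return 0.0, max(demand_kw, -motor_max_kw)   # recover energy
                return 0.0, 0.0                        # storage full: friction brakes absorb it
            if soc > soc_low:                          # hybrid assist available
                assist = min(demand_kw, motor_max_kw)
                return min(demand_kw - assist, engine_max_kw), assist
            return min(demand_kw, engine_max_kw), 0.0  # worst case: engine only

        # Example: 80 kW demand at 50% charge -> 20 kW from the engine, 60 kW assist.
        print(split_power(80.0, 0.5))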

  16. Planning Education for Regional Economic Integration: The Case of Paraguay and MERCOSUR.

    ERIC Educational Resources Information Center

    McGinn, Noel

    This paper examines the possible impact of MERCOSUR on Paraguay's economic and educational systems. MERCOSUR is a trade agreement among Argentina, Brazil, Paraguay, and Uruguay, under which terms all import tariffs among the countries will be eliminated by 1994. The countries will enter into a common economic market. The worst-case scenario…

  17. Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.; Emery, Josh; Delbo, Marco

    2014-01-01

    A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.

  18. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

    We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions balancing conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
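
    The empirical CVaR of a dissatisfaction distribution, swept from the average (probability level near 0) to the worst case (level near 1), is straightforward to compute. A small sketch on invented dissatisfaction scores (not the paper's facility-location model):

        import numpy as np

        def cvar(losses, alpha):
            """Empirical conditional value-at-risk: mean of the worst (1 - alpha) tail."""
            losses = np.sort(np.asarray(losses))
            tail_start = min(int(np.ceil(alpha * len(losses))), len(losses) - 1)
            return losses[tail_start:].mean()

        # Dissatisfaction of five stakeholders for three candidate sites (invented, 0-1 scale).
        dissatisfaction = np.array([
            [0.10, 0.80, 0.30, 0.20, 0.70],   # site A
            [0.40, 0.45, 0.40, 0.35, 0.50],   # site B
            [0.05, 0.95, 0.10, 0.15, 0.90],   # site C
        ])

        for alpha in (0.0, 0.6, 0.95):        # 0.0 -> average, near 1 -> worst case
            scores = [cvar(row, alpha) for row in dissatisfaction]
            best = "ABC"[int(np.argmin(scores))]
            print(f"alpha={alpha:.2f}: choose site {best}, CVaR={min(scores):.2f}")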

  19. Availability Simulation of AGT Systems

    DOT National Transportation Integrated Search

    1975-02-01

    The report discusses the analytical and simulation procedures that were used to evaluate the effects of failure in a complex dual mode transportation system based on a worst-case steady-state condition. The computed results are an availability figure ...

  20. Carbon monoxide screen for signalized intersections COSIM, version 3.0 : technical documentation.

    DOT National Transportation Integrated Search

    2008-07-01

    The Illinois Department of Transportation (IDOT) currently uses the computer screening model Illinois CO Screen for Intersection Modeling (COSIM) to estimate worst-case CO concentrations for proposed roadway projects affecting signalized intersec...

  1. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  2. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  3. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  4. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  5. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  6. RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.

  7. RMP Guidance for Chemical Distributors - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    How to perform the OCA for regulated substances, informing the government and the public about potential consequences of an accidental chemical release at your facility. Includes calculations for worst-case scenario, alternative scenarios, and endpoints.

  8. Gum Disease

    MedlinePlus

    ... damage to the tissue and bone supporting the teeth. In the worst cases, you can lose teeth. In gingivitis, the gums become red and swollen. ... flossing and regular cleanings by a dentist or dental hygienist. Untreated gingivitis can lead to periodontitis. If ...

  9. The Effect of Reaction Control System Thruster Plume Impingement on Orion Service Module Solar Array Power Production

    NASA Technical Reports Server (NTRS)

    Bury, Kristen M.; Kerslake, Thomas W.

    2008-01-01

    NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45 degrees towards the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.

  10. Response of the North American corn belt to climate warming, CO2

    NASA Astrophysics Data System (ADS)

    1983-08-01

    The climate of the North American corn belt was characterized to estimate the effects of climatic change on that agricultural region. Heat and moisture characteristics of the current corn belt were identified and mapped based on a simulated climate for a doubling of atmospheric CO2 concentrations. The result was a map of the projected corn belt corresponding to the simulated climatic change. Such projections were made with and without an allowance for earlier planting dates that could occur under a CO2-induced climatic warming. Because the direct effects of CO2 increases on plants, improvements in farm technology, and plant breeding are not considered, the resulting projections represent an extreme or worst case. The results indicate that even for such a worst case, climatic conditions favoring corn production would not extend very far into Canada. Climatic buffering effects of the Great Lakes would apparently retard northeastward shifts in corn-belt location.

  11. Performance of a normalized energy metric without jammer state information for an FH/MFSK system in worst case partial band jamming

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.

  12. Centaur Propellant Thermal Conditioning Study

    NASA Technical Reports Server (NTRS)

    Blatt, M. H.; Pleasant, R. L.; Erickson, R. C.

    1976-01-01

    A wicking investigation revealed that passive thermal conditioning was feasible and provided considerable weight advantage over active systems using throttled vent fluid in a Centaur D-1s launch vehicle. Experimental wicking correlations were obtained using empirical revisions to the analytical flow model. Thermal subcoolers were evaluated parametrically as a function of tank pressure and NPSP. Results showed that the RL10 category I engine was the best candidate for boost pump replacement and the option showing the lowest weight penalty employed passively cooled acquisition devices, thermal subcoolers, dry ducts between burns and pumping of subcooler coolant back into the tank. A mixing correlation was identified for sizing the thermodynamic vent system mixer. Worst case mixing requirements were determined by surveying Centaur D-1T, D-1S, IUS, and space tug vehicles. Vent system sizing was based upon worst case requirements. Thermodynamic vent system/mixer weights were determined for each vehicle.

  13. Updated model assessment of pollution at major U. S. airports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamartino, R.J.; Rote, D.M.

    1979-02-01

    The air quality impact of aircraft at and around Los Angeles International Airport (LAX) was simulated for hours of peak aircraft operation and 'worst case' pollutant dispersion conditions by using an updated version of the Argonne Airport Vicinity Air Pollution model; field programs at LAX, O'Hare, and John F. Kennedy International Airports determined the 'worst case' conditions. Maximum carbon monoxide concentrations at LAX were low relative to National Ambient Air Quality Standards; relatively high and widespread hydrocarbon concentrations indicated that aircraft emissions may aggravate oxidant problems near the airport; nitrogen oxide concentrations were close to the levels set in proposed standards. Data on typical time-in-mode for departing and arriving aircraft, the 8/4/77 diurnal variation in airport activity, and carbon monoxide concentration isopleths are given, and the update factors in the model are discussed.

  14. Bristol Ridge: A 28-nm x86 Performance-Enhanced Microprocessor Through System Power Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel

    Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. Performance/watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low thread count workloads. In this paper, we describe five power management techniques that maximize the per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15% depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.

  15. VEGA Launch Vehicle Dynamic Environment: Flight Experience and Qualification Status

    NASA Astrophysics Data System (ADS)

    Di Trapani, C.; Fotino, D.; Mastrella, E.; Bartoccini, D.; Bonnet, M.

    2014-06-01

    The VEGA Launch Vehicle (LV) is equipped during flight with more than 400 sensors (pressure transducers, accelerometers, microphones, strain gauges, etc.) aimed at capturing the physical phenomena occurring during the mission. The main objective of these sensors is to verify that the flight conditions are compliant with the launch vehicle and satellite qualification status and to characterize the phenomena that occur during flight. During VEGA development, several test campaigns were performed in order to characterize its dynamic environment and identify the worst-case conditions, but only with flight data analysis is it possible to confirm the worst cases identified and to check the compliance of the operative life conditions with the components' qualification status. The scope of the present paper is to show a comparison of the sinusoidal dynamic phenomena that occurred during VEGA's first and second flights and to give a summary of the launch vehicle qualification status.

  16. The Effect of Reaction Control System Thruster Plume Impingement on Orion Service Module Solar Array Power Production

    NASA Astrophysics Data System (ADS)

    Bury, Kristen M.; Kerslake, Thomas W.

    2008-06-01

    NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° toward the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.

  17. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound on the number of iterations, expressed in terms of the problem dimension n and a threshold ε related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
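
    To make the interior-point machinery concrete, the sketch below implements one standard primal-dual iteration for a toy nonnegativity-constrained quadratic program; it is a generic illustration under assumed conventions (fixed centering parameter, dense Newton solve), not the authors' contact-problem solver, and it omits the preconditioner and physics-based initial guesses described in the abstract.

```python
import numpy as np

def solve_qp_ipm(Q, c, tol=1e-8, max_iter=50):
    """Minimize 0.5*x'Qx + c'x subject to x >= 0 with a basic
    primal-dual interior-point method (toy illustration only)."""
    n = len(c)
    x = np.ones(n)      # primal iterate, kept strictly positive
    z = np.ones(n)      # dual slacks for the constraints x >= 0
    for _ in range(max_iter):
        r_d = Q @ x + c - z             # dual residual
        mu = (x @ z) / n                # duality measure
        if np.linalg.norm(r_d) < tol and mu < tol:
            break
        sigma = 0.1                     # centering parameter
        r_c = x * z - sigma * mu        # complementarity residual
        # Eliminate dz from the Newton system:
        #   (Q + diag(z/x)) dx = -r_d - r_c/x,   dz = (-r_c - z*dx)/x
        M = Q + np.diag(z / x)
        dx = np.linalg.solve(M, -r_d - r_c / x)
        dz = (-r_c - z * dx) / x
        # Damped step keeping x and z strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (z, dz)):
            neg = dv < 0
            if np.any(neg):
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x = x + alpha * dx
        z = z + alpha * dz
    return x

# Tiny example: projection of the point [1, -2, 3] onto the nonnegative orthant
Q = np.eye(3)
c = -np.array([1.0, -2.0, 3.0])
print(solve_qp_ipm(Q, c))   # approximately [1, 0, 3]
```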

  18. Statistical analysis of QC data and estimation of fuel rod behaviour

    NASA Astrophysics Data System (ADS)

    Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.

    1991-02-01

    The behaviour of fuel rods while in reactor is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (worst case dataset) in fuel rod design calculations. Distributions are not considered. The results obtained in this way are very conservative but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst case dataset by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
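
    The sketch below contrasts the two approaches described above on a made-up response quantity (the diametral pellet-to-cladding gap); the parameter nominals, tolerances, and the assumption that tolerances correspond to 3-sigma bounds are illustrative only, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fabrication parameters: nominal value and tolerance (mm)
pellet_diam = (8.20, 0.01)     # fuel pellet outer diameter
clad_inner  = (8.36, 0.02)     # cladding inner diameter

def gap(pellet, clad):
    """Diametral pellet-to-cladding gap; a stand-in response quantity."""
    return clad - pellet

# Worst-case dataset: superimpose the unfavorable tolerance limits
worst_case = gap(pellet_diam[0] + pellet_diam[1],
                 clad_inner[0] - clad_inner[1])

# Probabilistic alternative: sample within tolerances (normal, tolerance = 3 sigma)
n = 100_000
pellet = rng.normal(pellet_diam[0], pellet_diam[1] / 3, n)
clad   = rng.normal(clad_inner[0],  clad_inner[1] / 3, n)
samples = gap(pellet, clad)

print(f"worst-case gap:        {worst_case:.4f} mm")
print(f"0.1% quantile of gap:  {np.quantile(samples, 0.001):.4f} mm")
```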

  19. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
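
    For readers unfamiliar with CSP terminology, the following is a minimal chronological backtracking search over a binary constraint graph; it is a plain baseline for illustration and does not implement the omega-CDBT algorithm or any decomposition-based technique proposed in the paper.

```python
def backtrack(assignment, variables, domains, constraints):
    """Plain chronological backtracking for binary CSPs.
    constraints: dict mapping (u, v) -> predicate(value_u, value_v)."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        ok = True
        for (u, v), pred in constraints.items():
            if u == var and v in assignment and not pred(value, assignment[v]):
                ok = False
            if v == var and u in assignment and not pred(assignment[u], value):
                ok = False
        if ok:
            assignment[var] = value
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
            del assignment[var]
    return None

# 3-colouring of a small graph posed as an example CSP
variables = ["A", "B", "C", "D"]
domains = {v: [0, 1, 2] for v in variables}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
constraints = {e: (lambda x, y: x != y) for e in edges}
print(backtrack({}, variables, domains, constraints))
```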

  20. Scheduling policies of intelligent sensors and sensor/actuators in flexible structures

    NASA Astrophysics Data System (ADS)

    Demetriou, Michael A.; Potami, Raffaele

    2006-03-01

    In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open-loop system and finds the spatial distribution of disturbances that produces the maximal effect on the entire open-loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider the collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensors/actuators in sleep mode.

  1. Optimizing Processes to Minimize Risk

    NASA Technical Reports Server (NTRS)

    Loyd, David

    2017-01-01

    NASA, like the other hazardous industries, has suffered very catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

  2. ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?

    PubMed

    La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim

    2014-10-01

    Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved F1717 standard for the assessment of mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent the clinical use, but available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or measurements on biplanar stereoradiography on 13 patients. Other mechanical variables, describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents quite well the anatomy of an instrumented average thoracolumbar segment. The stress on the pedicular screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst case condition we found at L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.

  3. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    In this article, we consider a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP), in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and those assigned to its neighbours is no less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made, again by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms, and the results show the efficiency of the proposed algorithm in terms of the colour set size and running time.
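
    As a concrete illustration of the bandwidth colouring constraint (adjacent vertices must receive colours separated by at least a prescribed distance), here is a simple greedy assignment on a made-up instance; it is not one of the learning automata-based algorithms proposed in the article.

```python
def greedy_bandwidth_colouring(vertices, edges):
    """Assign the smallest colour to each vertex such that
    |colour(u) - colour(v)| >= d(u, v) for every already-coloured neighbour.
    edges: dict mapping frozenset({u, v}) -> required separation d(u, v)."""
    colour = {}
    for v in vertices:
        c = 0
        while any(
            abs(c - colour[u]) < d
            for (pair, d) in edges.items()
            for u in pair
            if v in pair and u != v and u in colour
        ):
            c += 1
        colour[v] = c
    return colour

# Small example: separations required between adjacent vertices
vertices = ["v1", "v2", "v3", "v4"]
edges = {
    frozenset({"v1", "v2"}): 3,
    frozenset({"v2", "v3"}): 2,
    frozenset({"v1", "v3"}): 1,
    frozenset({"v3", "v4"}): 2,
}
print(greedy_bandwidth_colouring(vertices, edges))
```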

  4. Learning Search Control Knowledge for Deep Space Network Scheduling

    NASA Technical Reports Server (NTRS)

    Gratch, Jonathan; Chien, Steve; DeJong, Gerald

    1993-01-01

    While the general class of scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.

  5. Availability Analysis of Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-04-01

    The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...

  6. Topical Backgrounder: Evaluating Chemical Hazards in the Community: Using RMP's Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Part of a May 1999 series on the Risk Management Program Rule and issues related to chemical emergency management. Explains hazard versus risk, worst-case and alternative release scenarios, flammable endpoints and toxic endpoints.

  7. General RMP Guidance - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    This chapter provides basic compliance information, not modeling methodologies, for people who plan to do their own air dispersion modeling. OCA is a required part of the risk management program, and involves worst-case and alternative release scenarios.

  8. INCORPORATING NONCHEMICAL STRESSORS INTO CUMMULATIVE RISK ASSESSMENTS

    EPA Science Inventory

    The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...

  9. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... limits of current technology, for the range of environmental conditions anticipated at your facility; and... Society for Testing of Materials (ASTM) publication F625-94, Standard Practice for Describing...

  10. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., materials, support vessels, and strategies listed are suitable, within the limits of current technology, for... equipment. Examples of acceptable terms include those defined in American Society for Testing of Materials...

  11. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
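
    A minimal sketch of the buffer-fullness-driven frequency adjustment idea is given below; the frequency levels, thresholds, and step-wise policy are assumptions for illustration rather than the controller described in the paper.

```python
# Illustrative frequency levels (MHz) assumed to be available on the platform
FREQ_LEVELS = [300, 600, 900, 1200]

def pick_frequency(buffer_fill, low=0.25, high=0.75, current_idx=1):
    """Choose a CPU frequency index from the fullness of the output buffer.
    A draining buffer (fill below `low`) means decoding is falling behind,
    so step the frequency up; a full buffer (above `high`) means there is
    slack, so step it down to save power."""
    if buffer_fill < low:
        return min(current_idx + 1, len(FREQ_LEVELS) - 1)
    if buffer_fill > high:
        return max(current_idx - 1, 0)
    return current_idx

# Example: react to a sequence of observed buffer fullness values
idx = 1
for fill in [0.8, 0.6, 0.2, 0.1, 0.5, 0.9]:
    idx = pick_frequency(fill, current_idx=idx)
    print(f"fill={fill:.1f} -> {FREQ_LEVELS[idx]} MHz")
```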

  12. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  13. Characterization of Lunar Polar Illumination from a Power System Perspective

    NASA Technical Reports Server (NTRS)

    Fincannon, James

    2008-01-01

    This paper presents the results of illumination analyses for the lunar south and north pole regions obtained using an independently developed analytical tool and two types of digital elevation models (DEM). One DEM was based on radar height data from Earth observations of the lunar surface and the other was a combination of the radar data with a separate dataset generated using Clementine spacecraft stereo imagery. The analysis tool enables the assessment of illumination at most locations in the lunar polar regions for any time and any year. Maps are presented for both lunar poles for the worst case winter period (the critical power system design and planning bottleneck) and for the more favorable best case summer period. Average illumination maps are presented to help understand general topographic trends over the regions. Energy storage duration maps are presented to assist in power system design. Average illumination fraction, energy storage duration, solar/horizon terrain elevation profiles and illumination fraction profiles are presented for favorable lunar north and south pole sites which have the potential for manned or unmanned spacecraft operations. The format of the data is oriented for use by power system designers to develop mass optimized solar and energy storage systems.

  14. Characteristics of worst hour rainfall rate for radio wave propagation modelling in Nigeria

    NASA Astrophysics Data System (ADS)

    Osita, Ibe; Nymphas, E. F.

    2017-10-01

    Radio waves, especially in the millimeter-wave band, are known to be attenuated by rain. Radio engineers and designers need to be able to predict the time of day when the radio signal will be attenuated so as to provide measures to mitigate this effect. This is achieved by characterizing the rainfall intensity for a particular region of interest into the worst month and the worst hour of the day. This paper characterizes rainfall in Nigeria into worst year, worst month, and worst hour. It is shown that for the period of study, 2008 and 2009 were the worst years, while September was the most frequent worst month at most of the stations. The evening hours (LT) are the worst hours of the day at virtually all the stations.

  15. Stressful life events and catechol-O-methyl-transferase (COMT) gene in bipolar disorder.

    PubMed

    Hosang, Georgina M; Fisher, Helen L; Cohen-Woods, Sarah; McGuffin, Peter; Farmer, Anne E

    2017-05-01

    A small body of research suggests that gene-environment interactions play an important role in the development of bipolar disorder. The aim of the present study is to contribute to this work by exploring the relationship between stressful life events and the catechol-O-methyl-transferase (COMT) Val 158 Met polymorphism in bipolar disorder. Four hundred eighty-two bipolar cases and 205 psychiatrically healthy controls completed the List of Threatening Experiences Questionnaire. Bipolar cases reported the events experienced 6 months before their worst depressive and manic episodes; controls reported those events experienced 6 months prior to their interview. The genotypic information for the COMT Val 158 Met variant (rs4680) was extracted from GWAS analysis of the sample. The impact of stressful life events was moderated by the COMT genotype for the worst depressive episode using a Val dominant model (adjusted risk difference = 0.09, 95% confidence intervals = 0.003-0.18, P = .04). For the worst manic episodes no significant interactions between COMT and stressful life events were detected. This is the first study to explore the relationship between stressful life events and the COMT Val 158 Met polymorphism focusing solely on bipolar disorder. The results of this study highlight the importance of the interplay between genetic and environmental factors for bipolar depression. © 2017 Wiley Periodicals, Inc.

  16. "Just Let the Worst Students Go": A Critical Case Analysis of Public Discourse about Race, Merit, and Worth

    ERIC Educational Resources Information Center

    Zirkel, Sabrina; Pollack, Terry M.

    2016-01-01

    We present a case analysis of the controversy and public debate generated from a school district's efforts to address racial inequities in educational outcomes by diverting special funds from the highest performing students seeking elite college admissions to the lowest performing students who were struggling to graduate from high school.…

  17. Beyond the Moscow Treaty: Alternative Perspectives on the Future Roles and Utility of Nuclear Weapons

    DTIC Science & Technology

    2008-03-01

    Adversarial Tripolarity ... Fallen Nuclear Dominoes ... power dimension, it is possible to imagine a best case (deep concert) and a worst case (adversarial tripolarity) and some less extreme outcomes, one ... vanquished and the sub-regions have settled into relative stability). 5. Adversarial U.S.-Russia-China tripolarity: In this world, the regional ...

  18. The Best of Times and the Worst of Times: Research Managed as a Performance Economy--The Australian Case. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Marginson, Simon

    This study examined the character of the emerging systems of corporate management in Australian universities and their effects on academic and administrative practices, focusing on relations of power. Case studies were conducted at 17 individual universities of various types. In each institution, interviews were conducted with senior…

  19. Elementary Social Studies in 2005: Danger or Opportunity?--A Response to Jeff Passe

    ERIC Educational Resources Information Center

    Libresco, Andrea S.

    2006-01-01

    From the emphasis on lower-level test-prep materials to the disappearance of the subject altogether, elementary social studies is, in the best case scenario, being tested and, thus, taught with a heavy emphasis on recall; and, in the worst-case scenario, not being taught at all. In this article, the author responds to Jeff Passe's views on…

  20. Thermal Analysis of a Metallic Wing Glove for a Mach-8 Boundary-Layer Experiment

    NASA Technical Reports Server (NTRS)

    Gong, Leslie; Richards, W. Lance

    1998-01-01

    A metallic 'glove' structure has been built and attached to the wing of the Pegasus(trademark) space booster. An experiment on the upper surface of the glove has been designed to help validate boundary-layer stability codes in a free-flight environment. Three-dimensional thermal analyses have been performed to ensure that the glove structure design would be within allowable temperature limits in the experiment test section of the upper skin of the glove. Temperature results obtained from the design-case analysis show a peak temperature at the leading edge of 490 °F. For the upper surface of the glove, approximately 3 in. back from the leading edge, temperature calculations indicate transition occurs at approximately 45 sec into the flight profile. A worst-case heating analysis has also been performed to ensure that the glove structure would not have any detrimental effects on the primary objective of the Pegasus launch. A peak temperature of 805 °F has been calculated on the leading edge of the glove structure. The temperatures predicted from the design case are well within the temperature limits of the glove structure, and the worst-case heating analysis temperature results are acceptable for the mission objectives.

  1. Extrapolating target tracks

    NASA Astrophysics Data System (ADS)

    Van Zandt, James R.

    2012-05-01

    Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
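
    The following sketch shows the basic extrapolation step being evaluated: propagating a post-update track state and covariance through the processing/communications latency. It uses a one-dimensional nearly-constant-velocity (continuous white acceleration) model with illustrative numbers; it does not reproduce the Singer-model or reduced-state filters analysed in the paper.

```python
import numpy as np

def extrapolate(x, P, dt, q):
    """Propagate a 1-D position/velocity track state and covariance
    forward by dt seconds under a continuous white-acceleration model
    with power spectral density q."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# State right after a track update: position 1000 m, velocity 50 m/s
x = np.array([1000.0, 50.0])
P = np.diag([25.0, 4.0])          # post-update covariance (illustrative)
x_lat, P_lat = extrapolate(x, P, dt=2.0, q=1.0)
print("extrapolated position RMS error:", np.sqrt(P_lat[0, 0]), "m")
```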

  2. Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. 
L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. 
A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-08-01

    We report on a comprehensive all-sky search for periodic gravitational waves in the frequency band 100-1500 Hz and with a frequency time derivative in the range of [-1.18, +1.00] × 10^-8 Hz/s. Such a signal could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our galaxy. This search uses the data from the initial LIGO sixth science run and covers a larger parameter space with respect to any past search. A Loosely Coherent detection pipeline was applied to follow up weak outliers in both Gaussian (95% recovery rate) and non-Gaussian (75% recovery rate) bands. No gravitational wave signals were observed, and upper limits were placed on their strength. Our smallest upper limit on worst-case (linearly polarized) strain amplitude h0 is 9.7 × 10^-25 near 169 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 5.5 × 10^-24. Both cases refer to all sky locations and the entire range of frequency derivative values.

  3. Zika virus in French Polynesia 2013-14: anatomy of a completed outbreak.

    PubMed

    Musso, Didier; Bossin, Hervé; Mallet, Henri Pierre; Besnard, Marianne; Broult, Julien; Baudouin, Laure; Levi, José Eduardo; Sabino, Ester C; Ghawche, Frederic; Lanteri, Marion C; Baud, David

    2018-05-01

    The Zika virus crisis exemplified the risk associated with emerging pathogens and was a reminder that preparedness for the worst-case scenario, although challenging, is needed. Herein, we review all data reported during the unexpected emergence of Zika virus in French Polynesia in late 2013. We focus on the new findings reported during this outbreak, especially the first description of severe neurological complications in adults and the retrospective description of CNS malformations in neonates, the isolation of Zika virus in semen, the potential for blood-transfusion transmission, mother-to-child transmission, and the development of new diagnostic assays. We describe the effect of this outbreak on health systems, the implementation of vector-borne control strategies, and the line of communication used to alert the international community of the new risk associated with Zika virus. This outbreak highlighted the need for careful monitoring of all unexpected events that occur during an emergence, to implement surveillance and research programmes in parallel with the management of cases, and to be prepared for the worst-case scenario. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. [Diagnosis and the technology for optimizing the medical support of a troop unit].

    PubMed

    Korshever, N G; Polkovov, S V; Lavrinenko, O V; Krupnov, P A; Anastasov, K N

    2000-05-01

    The work is devoted to the investigation of the system of military unit medical support using the principles and tenets of organizational diagnosis, to the development of a method for assessing its functional activity, and to the determination of directions for optimization. Based on the organizational diagnosis and an expert inquiry, informative criteria were determined that characterize the stages of functioning of the military unit medical support system. To evaluate the success of military unit medical support, a complex multi-criteria model was developed and an algorithm for optimizing this process was substantiated. Using the results obtained, in particular by implementing the principles and tenets of decision theory in a computer program, it is possible to solve the more complex problem of comparing any number of military units: ranking them in order of decreasing priority, selecting a specified number of the best and the worst, and determining directions for optimizing the activity of the corresponding medical service personnel.

  5. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
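
    As a rough illustration of the 'adaptive' ingredient, the sketch below shows one Kalman predict/update cycle with an innovation-based fading factor that inflates the predicted covariance when the measurement disagrees with the prediction; this is a common simplified heuristic, not the combined adaptive/H-infinity algorithm proposed in the paper, and all models and numbers are illustrative.

```python
import numpy as np

def adaptive_kf_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of a Kalman filter with a simple
    innovation-based fading factor: if the observed innovation is larger
    than its predicted covariance allows, the predicted covariance is
    inflated so the filter trusts the new measurement more."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation and its predicted covariance
    v = y - H @ x_pred
    S = H @ P_pred @ H.T + R
    # Fading factor: ratio of observed to predicted innovation energy
    lam = max(1.0, float(v @ np.linalg.solve(S, v)) / len(v))
    P_pred = lam * P_pred
    S = H @ P_pred @ H.T + R
    # Update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ v
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D constant-velocity example with a single position measurement
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[4.0]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = adaptive_kf_step(x, P, y=np.array([25.0]), F=F, H=H, Q=Q, R=R)
print(x)
```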

  6. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.

  7. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ε), where ε is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.
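
    To illustrate the class of method, here is a generic (unsmoothed, uncalibrated) proximal-gradient iteration for an L1-penalized least-squares problem; it is not the CMR algorithm itself, and the toy problem, step-size rule, and iteration count are assumptions chosen only for demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(X, y, lam, n_iter=500):
    """ISTA-style proximal gradient for min_w 0.5/n*||Xw - y||^2 + lam*||w||_1."""
    n, p = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant of the smooth part
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy sparse-regression example
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.standard_normal(200)
print(np.round(proximal_gradient(X, y, lam=0.05), 2))
```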

  8. Experimental investigation of a 0.15 scale model of a conformal variable-ramp inlet for the F-16 airplane

    NASA Technical Reports Server (NTRS)

    Hawkins, J. E.

    1980-01-01

    A 0.15 scale model of a proposed conformal variable-ramp inlet for the Multirole Fighter was tested from Mach 0.8 to 2.2 at a wide range of angles of attack and sideslip. Inlet ramp angle was varied to optimize ramp angle as a function of engine airflow, Mach number, angle of attack, and angle of sideslip. Several inlet configuration options were investigated to study their effects on inlet operation and to establish the final flight configuration. These variations were cowl sidewall cutback, cowl lip bluntness, boundary layer bleed, and first-ramp leading edge shape. Diagnostic and engine face instrumentation were used to evaluate inlet operation at various inlet stations and at the inlet/engine interface. Pressure recovery and stability of the inlet were satisfactory for the proposed application. On the basis of an engine stability audit of the worst-case instantaneous distortion patterns, no inlet/engine compatibility problems are expected for normal operations.

  9. Fast marching methods for the continuous traveling salesman problem.

    PubMed

    Andrews, June; Sethian, J A

    2007-01-23

    We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The complexity of the heuristic algorithm is at worst O(M · N log N), where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest paths problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
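
    A discrete sketch of the two ingredients is given below: cheapest-path distances through a position-dependent cost field (plain Dijkstra on a grid, standing in for the fast marching solution of the continuous problem) followed by a simple nearest-neighbour closed tour over the resulting city-to-city costs; the grid, cost field, city locations, and tour heuristic are illustrative assumptions, not the authors' algorithms.

```python
import heapq
import numpy as np

def dijkstra(cost, source):
    """Cheapest travel cost from `source` to every cell of a 2-D cost grid
    (4-connected); a discrete stand-in for solving the eikonal equation
    with fast marching."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (cost[r, c] + cost[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

def nearest_neighbour_tour(cities, pairwise):
    """Greedy closed tour over the cities using the precomputed costs."""
    tour, rest = [0], set(range(1, len(cities)))
    while rest:
        last = tour[-1]
        nxt = min(rest, key=lambda j: pairwise[last][j])
        tour.append(nxt)
        rest.remove(nxt)
    return tour + [0]

rng = np.random.default_rng(2)
cost = 1.0 + rng.random((40, 40))            # position-dependent cost field
cities = [(5, 5), (30, 8), (20, 35), (35, 30)]
pairwise = [dijkstra(cost, c)[tuple(zip(*cities))] for c in cities]
print(nearest_neighbour_tour(cities, pairwise))
```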

  10. Probability Quantization for Multiplication-Free Binary Arithmetic Coding

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
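
    The notion of worst-case coding efficiency under an approximated probability can be illustrated numerically: if the true probability of the less probable symbol is p but the coder partitions its interval as if it were q, the average code length exceeds the entropy. The snippet below uses a nearest-power-of-one-half approximation purely as an example; it is not the approximation scheme developed in the paper.

```python
import math

def expected_length(p, q):
    """Average code length per symbol when the true LPS probability is p
    but the coder budgets its intervals as if it were q."""
    return -p * math.log2(q) - (1 - p) * math.log2(1 - q)

def entropy(p):
    return expected_length(p, p)

# Approximate p by the nearest power of 1/2 (so interval scaling becomes a shift)
for p in (0.05, 0.11, 0.2, 0.3, 0.45):
    q = 2.0 ** round(math.log2(p))
    eff = entropy(p) / expected_length(p, q)
    print(f"p={p:<5} approx q={q:<7} coding efficiency={eff:.3f}")
```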

  11. Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.

    DOT National Transportation Integrated Search

    2013-06-01

    Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer : program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon : monoxide (CO) concentrations near s...

  12. Global climate change: The quantifiable sustainability challenge

    EPA Science Inventory

    Population growth and the pressures spawned by increasing demands for energy and resource-intensive goods, foods and services are driving unsustainable growth in greenhouse gas (GHG) emissions. Recent GHG emission trends are consistent with worst-case scenarios of the previous de...

  13. Sidelobe reduction and capacity improvement of open-loop collaborative beamforming in wireless sensor networks

    PubMed Central

    2017-01-01

    Collaborative beamforming (CBF) with a finite number of collaborating nodes (CNs) produces sidelobes that are highly dependent on the collaborating nodes’ locations. The sidelobes cause interference and affect the communication rate of unintended receivers located within the transmission range. Nulling is not possible in an open-loop CBF since the collaborating nodes are unable to receive feedback from the receivers. Hence, the overall sidelobe reduction is required to avoid interference in the directions of the unintended receivers. However, the impact of sidelobe reduction on the capacity improvement at the unintended receiver has never been reported in previous works. In this paper, the effect of peak sidelobe (PSL) reduction in CBF on the capacity of an unintended receiver is analyzed. Three meta-heuristic optimization methods are applied to perform PSL minimization, namely genetic algorithm (GA), particle swarm algorithm (PSO) and a simplified version of the PSO called the weightless swarm algorithm (WSA). An average reduction of 20 dB in PSL alongside 162% capacity improvement is achieved in the worst case scenario with the WSA optimization. It is discovered that the PSL minimization in the CBF provides capacity improvement at an unintended receiver only if the CBF cluster is small and dense. PMID:28464000

  14. Robust Unit Commitment Considering Uncertain Demand Response

    DOE PAGES

    Liu, Guodong; Tomsovic, Kevin

    2014-09-28

    Although price-responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of the power system, the real response from the demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision taking into account uncertainty of the price elasticity of demand. By optimizing the worst case under a proper robustness level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.

  15. Improved Stability of a Model IgG3 by DoE-Based Evaluation of Buffer Formulations

    DOE PAGES

    Chavez, Brittany K.; Agarabi, Cyrus D.; Read, Erik K.; ...

    2016-01-01

    Formulating appropriate storage conditions for biopharmaceutical proteins is essential for ensuring their stability and thereby their purity, potency, and safety over their shelf-life. Using a model murine IgG3 produced in a bioreactor system, multiple formulation compositions were systematically explored in a DoE design to optimize the stability of a challenging antibody formulation worst case. The stability of the antibody in each buffer formulation was assessed by UV/VIS absorbance at 280 nm and 410 nm and size exclusion high performance liquid chromatography (SEC) to determine overall solubility, opalescence, and aggregate formation, respectively. Upon preliminary testing, acetate was eliminated as a potential storage buffer due to significant visible precipitate formation. An additional 2^4 full factorial DoE was performed that combined the stabilizing effect of arginine with the buffering capacity of histidine. From this final DoE, an optimized formulation of 200 mM arginine, 50 mM histidine, and 100 mM NaCl at a pH of 6.5 was identified to substantially improve stability under long-term storage conditions and after multiple freeze/thaw cycles. Therefore, our data highlight the power of DoE-based formulation screening approaches even for challenging monoclonal antibody molecules.
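
    A 2^4 full factorial design of the kind mentioned above simply enumerates every combination of two levels for four factors. The sketch below shows how such a run table can be generated; the factor names and level values are illustrative assumptions, not the study's actual formulation grid.

```python
# Minimal sketch of generating a 2^4 full factorial DoE run table.
from itertools import product

factors = {
    "arginine_mM":  (50, 200),
    "histidine_mM": (10, 50),
    "NaCl_mM":      (0, 100),
    "pH":           (5.5, 6.5),
}

# Every combination of the two levels for each of the four factors.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} formulation runs")   # 2**4 = 16
for run in runs[:3]:
    print(run)
```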

  16. Initial FDG-PET/CT predicts survival in adults Ewing sarcoma family of tumors

    PubMed Central

    Jamet, Bastien; Carlier, Thomas; Campion, Loic; Bompas, Emmanuelle; Girault, Sylvie; Borrely, Fanny; Ferrer, Ludovic; Rousseau, Maxime; Venel, Yann; Kraeber-Bodéré, Françoise; Rousseau, Caroline

    2017-01-01

    Purpose The aim of this retrospective study was to determine, at baseline, the prognostic value of different FDG-PET/CT quantitative parameters in a homogeneous Ewing Sarcoma Family of Tumors (ESFT) adult population, compared with clinically relevant prognostic factors. Methods Adult patients from 3 oncological centers, all with proven ESFT, were retrospectively included. Quantitative FDG-PET/CT parameters (SUV (maximum, peak and mean), metabolic tumor volume (MTV) and total lesion glycolysis (TLG)) of the primary lesion of each patient were recorded before treatment, as well as usual clinical prognostic factors (stage of disease, location, tumor size, gender and age). Then, their relation with progression free survival (PFS) and overall survival (OS) was evaluated. Results 32 patients were included. Median age was 21 years (range, 15 to 61). Nineteen patients (59%) were initially metastatic. On multivariate analysis, high SUVmax remained an independent predictor of worse OS (p=0.02) and PFS (p=0.019), metastatic disease of worse PFS (p=0.01), and high SUVpeak of worse OS (p=0.01). The optimal prognostic cut-off of SUVpeak was found at 12.5 in multivariate analyses for PFS and OS (p=0.0001). Conclusions FDG-PET/CT, recommended at ESFT diagnosis for initial staging, can be a useful tool for predicting long-term outcomes of adult patients through semi-quantitative parameters. PMID:29100369

  17. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard

    2000-01-01

    This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will start a series of notes concentrating on analysis techniques, with this issue's section discussing worst-case analysis requirements.

  18. EPHECT I: European household survey on domestic use of consumer products and development of worst-case scenarios for daily use.

    PubMed

    Dimitroulopoulou, C; Lucica, E; Johnson, A; Ashmore, M R; Sakellaris, I; Stranger, M; Goelen, E

    2015-12-01

    Consumer products are frequently and regularly used in the domestic environment. Realistic estimates for product use are required for exposure modelling and health risk assessment. This paper provides significant data that can be used as input for such modelling studies. A European survey was conducted, within the framework of the DG Sanco-funded EPHECT project, on the household use of 15 consumer products. These products are all-purpose cleaners, kitchen cleaners, floor cleaners, glass and window cleaners, bathroom cleaners, furniture and floor polish products, combustible air fresheners, spray air fresheners, electric air fresheners, passive air fresheners, coating products for leather and textiles, hair styling products, spray deodorants and perfumes. The analysis of the results from the household survey (1st phase) focused on identifying consumer behaviour patterns (selection criteria, frequency of use, quantities, period of use and ventilation conditions during product use). This can provide valuable input to modelling studies, as this information is not reported in the open literature. The above results were further analysed (2nd phase), to provide the basis for the development of 'most representative worst-case scenarios' regarding the use of the 15 products by home-based population groups (housekeepers and retired people), in four geographical regions in Europe. These scenarios will be used for the exposure and health risk assessment within the EPHECT project. To the best of our knowledge, it is the first time that daily worst-case scenarios are presented in the scientific published literature concerning the use of a wide range of 15 consumer products across Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  19. Analysis of Separation Corridors for Visiting Vehicles from the International Space Station

    NASA Technical Reports Server (NTRS)

    Zaczek, Mariusz P.; Schrock, Rita R.; Schrock, Mark B.; Lowman, Bryan C.

    2011-01-01

    The International Space Station (ISS) is a very dynamic vehicle with many operational constraints that affect its performance, operations, and vehicle lifetime. Most constraints are designed to alleviate various safety concerns that are a result of dynamic activities between the ISS and various Visiting Vehicles (VVs). One such constraint that has been in place for Russian Vehicle (RV) operations is the limitation placed on Solar Array (SA) positioning in order to prevent collisions during separation and subsequent relative motion of VVs. An unintended consequence of the SA constraint has been the impacts to the operational flexibility of the ISS resulting from the reduced power generation capability as well as from a reduction in the operational lifetime of various SA components. The purpose of this paper is to discuss the technique and the analysis that were applied in order to relax the SA constraints for RV undockings, thereby improving both the ISS operational flexibility and extending its lifetime for many years to come. This analysis focused on the effects of the dynamic motion that occur both prior to and following RV separations. The analysis involved a parametric approach in the conservative application of various initial conditions and assumptions. These included the use of the worst case minimum and maximum vehicle configurations, worst case initial attitudes and attitude rates, and the worst case docking port separation dynamics. Separations were calculated for multiple ISS docking ports, at varied deviations from the nominal undocking attitudes and included the use of two separate attitude control schemes: continuous free-drift and a post separation attitude hold. The analysis required numerical propagation of both the separation motion and the vehicle attitudes using 3-degree-of-freedom (DOF) relative motion equations coupled with rigid body rotational dynamics to generate a large set of separation trajectories.

  20. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Ecological risk estimation of organophosphorus pesticides in riverine ecosystems.

    PubMed

    Wee, Sze Yee; Aris, Ahmad Zaharin

    2017-12-01

    Pesticides are of great concern because of their existence in ecosystems at trace concentrations. Worldwide pesticide use and its ecological impacts (i.e., altered environmental distribution and toxicity of pesticides) have increased over time. Exposure and toxicity studies are vital for reducing the extent of pesticide exposure and risk to the environment and humans. Regional regulatory actions may be less relevant in some regions because the contamination and distribution of pesticides vary across regions and countries. The risk quotient (RQ) method was applied to assess the potential risk of organophosphorus pesticides (OPPs), primarily focusing on riverine ecosystems. Using the available ecotoxicity data, aquatic risks from OPPs (diazinon and chlorpyrifos) in the surface water of the Langat River, Selangor, Malaysia were evaluated based on general (RQm) and worst-case (RQex) scenarios. Since the ecotoxicity of quinalphos has not been well established, quinalphos was excluded from the risk assessment. The calculated RQs indicate medium risk (RQm = 0.17 and RQex = 0.66; 0.1 ≤ RQ < 1) of overall diazinon. The overall chlorpyrifos exposure was observed at high risk (RQ ≥ 1) based on RQm and RQex at 1.44 and 4.83, respectively. A contradictory trend of RQs > 1 (high risk) was observed for both the general and worst cases of chlorpyrifos, but only for the worst cases of diazinon at all sites from downstream to upstream regions. Thus, chlorpyrifos posed a higher risk than diazinon along the Langat River, suggesting that organisms and humans could be exposed to potentially high levels of OPPs. Copyright © 2017 Elsevier Ltd. All rights reserved.
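
    The risk-quotient screening above divides an exposure concentration by a no-effect concentration, using a mean exposure for the general case (RQm) and a maximum for the worst case (RQex), and classifies the result against the 0.1 and 1 thresholds quoted in the abstract. The sketch below shows that arithmetic; the measured concentrations and PNEC value are placeholders, not the study's data.

```python
# Minimal sketch of general vs worst-case risk-quotient screening.
def risk_quotient(concentrations_ng_per_L, pnec_ng_per_L):
    """Return (RQm, RQex): mean-exposure and maximum-exposure risk quotients."""
    mec_mean = sum(concentrations_ng_per_L) / len(concentrations_ng_per_L)
    mec_max = max(concentrations_ng_per_L)
    return mec_mean / pnec_ng_per_L, mec_max / pnec_ng_per_L

def risk_class(rq):
    if rq >= 1.0:
        return "high risk"
    if rq >= 0.1:
        return "medium risk"
    return "low risk"

measured = [12.0, 25.0, 40.0, 8.0]   # hypothetical surface-water levels, ng/L
pnec = 30.0                          # hypothetical predicted no-effect concentration, ng/L
rq_m, rq_ex = risk_quotient(measured, pnec)
print(f"RQm = {rq_m:.2f} ({risk_class(rq_m)}), RQex = {rq_ex:.2f} ({risk_class(rq_ex)})")
```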

  2. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.

    PubMed

    Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal

    2011-08-18

    RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
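
    The cubic baseline referenced above is the Nussinov-style dynamic program for base-pair maximization; the sketch below shows that standard O(n^3) recurrence (textbook form, not the paper's sub-cubic Valiant-based algorithm), with the minimum loop length and the test sequence as assumptions.

```python
# Standard cubic Nussinov-style DP for maximizing nested base pairs.
def nussinov_max_pairs(seq, min_loop=3):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):             # increasing subsequence spans
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                     # case: j left unpaired
            for k in range(i, j - min_loop):        # case: j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))   # small illustrative example
```

    The three nested loops over i, j, and the split point k are what give the cubic worst-case running time that Valiant-style matrix-multiplication reformulations improve.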

  3. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach

    PubMed Central

    2011-01-01

    Background RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. Results We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. Conclusions The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms. PMID:21851589

  4. Indoor exposure to toluene from printed matter matters: complementary views from life cycle assessment and risk assessment.

    PubMed

    Walser, Tobias; Juraske, Ronnie; Demou, Evangelia; Hellweg, Stefanie

    2014-01-01

    A pronounced presence of toluene from rotogravure printed matter has been frequently observed indoors. However, its consequences to human health in the life cycle of magazines are poorly known. Therefore, we quantified human-health risks in indoor environments with Risk Assessment (RA) and impacts relative to the total impact of toxic releases occurring in the life cycle of a magazine with Life Cycle Assessment (LCA). We used a one-box indoor model to estimate toluene concentrations in printing facilities, newsstands, and residences in a best, average, and worst-case scenario. The modeled concentrations are in the range of the values measured in on-site campaigns. Toluene concentrations can approach or even surpass the occupational legal thresholds in printing facilities in realistic worst-case scenarios. The concentrations in homes can surpass the US EPA reference dose (69 μg/kg/day) in worst-case scenarios, but are still at least 1 order of magnitude lower than in press rooms or newsstands. However, toluene inhaled at home becomes the dominant contribution to the total potential human toxicity impacts of toluene from printed matter when assessed with LCA, using the USEtox method complemented with indoor characterization factors for toluene. The significant contribution (44%) of toluene exposure in production, retail, and use in households, to the total life cycle impact of a magazine in the category of human toxicity, demonstrates that the indoor compartment requires particular attention in LCA. While RA works with threshold levels, LCA assumes that every toxic emission causes an incremental change to the total impact. Here, the combination of the two paradigms provides valuable information on the life cycle stages of printed matter.
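
    A one-box indoor model of the kind used above treats the room as well mixed, so the steady-state concentration is the emission rate divided by the ventilation flow. The sketch below shows that balance for assumed best, average, and worst cases; the emission rates, room volumes, and air-exchange rates are illustrative placeholders, not the study's inputs.

```python
# Minimal sketch of a steady-state one-box (well-mixed) indoor model.
scenarios = {
    # (emission rate mg/h, room volume m3, air changes per hour) - all assumed
    "best":    (5.0, 60.0, 1.5),
    "average": (20.0, 40.0, 0.8),
    "worst":   (60.0, 25.0, 0.3),
}

for name, (emission, volume, ach) in scenarios.items():
    # Steady state of dC/dt = E/V - ACH * C  =>  C = E / (ACH * V)
    conc = emission / (ach * volume) * 1000.0   # mg/m3 -> ug/m3
    print(f"{name:7s}: {conc:8.0f} ug/m3")
```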

  5. Level II scour analysis for Bridge 21 (MIDBTH00230021) on Town Highway 23, crossing the Middlebury River, Middlebury, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.; Degnan, James R.

    1997-01-01

    year discharges. In addition, the incipient roadway-overtopping discharge is determined and analyzed as another potential worst-case scour scenario. Total scour at a highway crossing is comprised of three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge) and; 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour and a summary of the results of these computations follows. Contraction scour for all modelled flows ranged from 1.2 to 1.8 feet. The worst-case contraction scour occurred at the incipient overtopping discharge, which is less than the 500-year discharge. Abutment scour ranged from 17.7 to 23.7 feet. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
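
    The bookkeeping described above is simple: total scour is the sum of long-term degradation, contraction scour, and local (abutment) scour, and the worst case is taken across the modelled discharges. The sketch below illustrates that sum; the degradation values and the per-flow depths are illustrative placeholders drawn loosely from the ranges quoted above, not the bridge's computed values.

```python
# Minimal sketch: total scour as the sum of components, worst case over flows.
flows = {
    # discharge: (degradation ft, contraction scour ft, abutment scour ft) - assumed
    "100-year":              (0.0, 1.2, 17.7),
    "500-year":              (0.0, 1.5, 23.7),
    "incipient overtopping": (0.0, 1.8, 20.5),
}

totals = {name: sum(parts) for name, parts in flows.items()}
worst = max(totals, key=totals.get)
print(f"worst-case total scour: {totals[worst]:.1f} ft at the {worst} discharge")
```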

  6. Poison ivy - oak - sumac rash

    MedlinePlus

    ... reaction can vary from mild to severe. In rare cases, the person with the rash needs to be treated in the hospital. The worst symptoms are often seen during days 4 to 7 after coming in contact with the plant. The rash may last for 1 to 3 ...

  7. Closed Environment Module - Modularization and extension of the Virtual Habitat

    NASA Astrophysics Data System (ADS)

    Plötner, Peter; Czupalla, Markus; Zhukov, Anton

    2013-12-01

    The Virtual Habitat (V-HAB) is a Life Support System (LSS) simulation created to perform dynamic simulation of LSSs for future human spaceflight missions. It allows the testing of LSS robustness by means of computer simulations, e.g. of worst-case scenarios.

  8. 49 CFR 238.431 - Brake system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow... a brake rate consistent with prevailing adhesion, passenger safety, and brake system thermal... adhesion control system designed to automatically adjust the braking force on each wheel to prevent sliding...

  9. 40 CFR 300.135 - Response operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY... discharge is a worst case discharge as discussed in § 300.324; the pathways to human and environmental exposure; the potential impact on human health, welfare, and safety and the environment; whether the...

  10. Management of reliability and maintainability; a disciplined approach to fleet readiness

    NASA Technical Reports Server (NTRS)

    Willoughby, W. J., Jr.

    1981-01-01

    Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.

  11. Empirical Modeling Of Single-Event Upset

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.

    1988-01-01

    Experimental study presents examples of empirical modeling of single-event upset in negatively-doped-source/drain metal-oxide-semiconductor static random-access memory cells. Data supports adoption of simplified worst-case model in which cross section of SEU by ion above threshold energy equals area of memory cell.
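
    The simplified worst-case model described above is a step function: zero cross section below the threshold, the full memory-cell area above it. The sketch below applies that model to an assumed ion environment to estimate an upset rate; the cell area, threshold, fluxes, and bit count are all illustrative assumptions.

```python
# Minimal sketch of the worst-case step-function SEU cross-section model.
def seu_cross_section(let, let_threshold=3.0, cell_area_cm2=1e-7):
    """Full cell area above the threshold LET, zero below it (worst-case model)."""
    return cell_area_cm2 if let >= let_threshold else 0.0

# Upset-rate estimate: sum over ion species of flux x cross-section x bit count.
ion_environment = [(2.0, 1e3), (5.0, 4e2), (20.0, 5e1)]   # (LET, flux per cm2/day) - assumed
bits = 64 * 1024
rate = sum(flux * seu_cross_section(let) for let, flux in ion_environment) * bits
print(f"estimated upsets per day (worst-case model): {rate:.3f}")
```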

  12. A General Safety Assessment for Purified Food Ingredients Derived From Biotechnology Crops: Case Study of Brazilian Sugar and Beverages Produced From Insect-Protected Sugarcane.

    PubMed

    Kennedy, Reese D; Cheavegatti-Gianotto, Adriana; de Oliveira, Wladecir S; Lirette, Ronald P; Hjelle, Jerry J

    2018-01-01

    Insect-protected sugarcane that expresses Cry1Ab has been developed in Brazil. Analysis of trade information has shown that effectively all the sugarcane-derived Brazilian exports are raw or refined sugar and ethanol. The fact that raw and refined sugar are highly purified food ingredients, with no detectable transgenic protein, provides an interesting case study of a generalized safety assessment approach. In this study, both the theoretical protein intakes and safety assessments of Cry1Ab, Cry1Ac, NPTII, and Bar proteins used in insect-protected biotechnology crops were examined. The potential consumption of these proteins was examined using local market research data of average added sugar intakes in eight diverse and representative Brazilian raw and refined sugar export markets (Brazil, Canada, China, Indonesia, India, Japan, Russia, and the USA). The average sugar intakes, which ranged from 5.1 g of added sugar/person/day (India) to 126 g sugar/p/day (USA), were used to calculate possible human exposure. The theoretical protein intake estimates were carried out for the "Worst-case" scenario, which assumed that 1 μg of newly-expressed protein is detected/g of raw or refined sugar, and the "Reasonable-case" scenario, which assumed 1 ng protein/g sugar. The "Worst-case" scenario was based on results of detailed studies of sugarcane processing in Brazil that showed that refined sugar contains less than 1 μg of total plant protein/g refined sugar. The "Reasonable-case" scenario was based on the assumption that the expression levels in stalk of newly-expressed proteins were less than 0.1% of total stalk protein. Using these calculated protein intake values from the consumption of sugar, along with the accepted NOAEL levels of the four representative proteins, we concluded that safety margins for the "Worst-case" scenario ranged from 6.9 × 10^5 to 5.9 × 10^7 and for the "Reasonable-case" scenario ranged from 6.9 × 10^8 to 5.9 × 10^10. These safety margins are very high due to the extremely low possible exposures and the high NOAELs for these non-toxic proteins. This generalized approach to the safety assessment of highly purified food ingredients like sugar illustrates that sugar processed from Brazilian GM varieties is safe for consumption in representative markets globally.
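
    The safety-margin arithmetic above is: daily protein intake (sugar consumption times assumed protein content, per kg body weight) divided into a NOAEL. The sketch below reproduces that calculation using the intakes quoted in the abstract; the NOAEL and the 60 kg body weight are illustrative placeholders, not the study's values.

```python
# Worked sketch of the worst-case vs reasonable-case safety-margin arithmetic.
def safety_margin(sugar_g_per_day, protein_per_g_sugar_mg, noael_mg_per_kg_day,
                  body_weight_kg=60.0):
    intake_mg_per_kg_day = sugar_g_per_day * protein_per_g_sugar_mg / body_weight_kg
    return noael_mg_per_kg_day / intake_mg_per_kg_day

noael = 2000.0                      # hypothetical NOAEL, mg/kg bw/day (placeholder)
for market, sugar_g in (("India", 5.1), ("USA", 126.0)):
    worst = safety_margin(sugar_g, 1e-3, noael)        # worst case: 1 ug protein/g sugar
    reasonable = safety_margin(sugar_g, 1e-6, noael)   # reasonable case: 1 ng protein/g sugar
    print(f"{market:5s}: worst-case margin {worst:.1e}, reasonable-case margin {reasonable:.1e}")
```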

  13. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those on weekend days.
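
    A hedged sketch of the extrapolation idea (not the paper's exact procedure): the measured constant-power pilot or broadcast channel field is scaled by the square root of a power ratio, either a theoretical maximum or a smaller, traffic-based realistic ratio. The pilot field and both ratios below are illustrative assumptions, chosen only so that the gap reproduces the roughly 5.7 dB overestimate quoted above.

```python
# Hedged sketch: worst-case vs realistic maximal-field extrapolation from a pilot channel.
import math

e_pilot = 0.5                 # measured BCCH/CPICH field, V/m (assumed)
ratio_theoretical = 16.0      # max total power / pilot power (assumed)
ratio_realistic = 4.3         # realistic 99th-percentile power ratio (assumed)

e_worst = e_pilot * math.sqrt(ratio_theoretical)
e_real = e_pilot * math.sqrt(ratio_realistic)
overestimate_db = 20 * math.log10(e_worst / e_real)
print(f"worst case {e_worst:.2f} V/m, realistic max {e_real:.2f} V/m, "
      f"overestimate {overestimate_db:.1f} dB")
```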

  14. Full band all-sky search for periodic gravitational waves in the O1 LIGO data

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allen, G.; Allocca, A.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Angelova, S. V.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Atallah, D. V.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Austin, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barkett, K.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bawaj, M.; Bayley, J. C.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Bero, J. J.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Biscoveanu, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonilla, E.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bossie, K.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderón; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerdá-Durán, P.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chase, E.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, X.; Chen, Y.; Cheng, H.-P.; Chia, H. Y.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciecielag, P.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Clearwater, P.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Cohen, D.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Cordero-Carrión, I.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, E. T.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Dálya, G.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davis, D.; Daw, E. 
J.; Day, B.; De, S.; DeBra, D.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Demos, N.; Denker, T.; Dent, T.; De Pietri, R.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; De Rossi, C.; DeSalvo, R.; de Varona, O.; Devenson, J.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorosh, O.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Dreissigacker, C.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dupej, P.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Estevez, D.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fee, C.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Finstad, D.; Fiori, I.; Fiorucci, D.; Fishbach, M.; Fisher, R. P.; Fitz-Axen, M.; Flaminio, R.; Fletcher, M.; Fong, H.; Font, J. A.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garcia-Quiros, C.; Garufi, F.; Gateley, B.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; Goncharov, B.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Gretarsson, E. M.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Halim, O.; Hall, B. R.; Hall, E. D.; Hamilton, E. Z.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinderer, T.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hreibi, A.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kamai, B.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, K.; Kim, W.; Kim, W. 
S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kinley-Hanlon, M.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Knowles, T. D.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Linker, S. D.; Littenberg, T. B.; Liu, J.; Lo, R. K. L.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macas, R.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Markowitz, A.; Maros, E.; Marquina, A.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Mason, K.; Massera, E.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McNeill, L.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, B. B.; Miller, J.; Millhouse, M.; Milovich-Goff, M. C.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moffa, D.; Moggi, A.; Mogushi, K.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muñiz, E. A.; Muratore, M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Neilson, J.; Nelemans, G.; Nelson, T. J. N.; Nery, M.; Neunzert, A.; Nevin, L.; Newport, J. M.; Newton, G.; Ng, K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; North, C.; Nuttall, L. K.; Oberling, J.; O'Dea, G. D.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Okada, M. A.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ossokine, S.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, Howard; Pan, Huang-Wei; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Parida, A.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patil, M.; Patricelli, B.; Pearlstone, B. 
L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pirello, M.; Pisarski, A.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Pratten, G.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rajbhandari, B.; Rakhmanov, M.; Ramirez, K. E.; Ramos-Buades, A.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Ren, W.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Rutins, G.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sanchez, L. E.; Sanchis-Gual, N.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheel, M.; Scheuer, J.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shaner, M. B.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, L. P.; Singh, A.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Somala, S.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staats, K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stevenson, S. P.; Stone, R.; Stops, D. J.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Strunk, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Suresh, J.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Tait, S. C.; Talbot, C.; Talukder, D.; Tanner, D. B.; Tao, D.; Tápai, M.; Taracchini, A.; Tasson, J. D.; Taylor, J. A.; Taylor, R.; Tewari, S. V.; Theeg, T.; Thies, F.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torres-Forné, A.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tsukada, L.; Tsuna, D.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. 
J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, W. H.; Wang, Y. F.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westerweck, J.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Wilken, D.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, W. K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wysocki, D. M.; Xiao, S.; Yamamoto, H.; Yancey, C. C.; Yang, L.; Yap, M. J.; Yazback, M.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadroźny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2018-05-01

    We report on a new all-sky search for periodic gravitational waves in the frequency band 475-2000 Hz and with a frequency time derivative in the range of [-1.0, +0.1] × 10^-8 Hz/s. Potential signals could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our Galaxy. This search uses the data from Advanced LIGO's first observational run O1. No gravitational-wave signals were observed, and upper limits were placed on their strengths. For completeness, results from the separately published low-frequency search 20-475 Hz are included as well. Our lowest upper limit on worst-case (linearly polarized) strain amplitude h0 is ~4 × 10^-25 near 170 Hz, while at the high end of our frequency range, we achieve a worst-case upper limit of 1.3 × 10^-24. For a circularly polarized source (most favorable orientation), the smallest upper limit obtained is ~1.5 × 10^-25.

  15. Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1994-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.

  16. Correct consideration of the index of refraction using blackbody radiation.

    PubMed

    Hartmann, Jurgen

    2006-09-04

    The correct consideration of the index of refraction when using blackbody radiators as standard sources for optical radiation is derived and discussed. It is shown that simply using the index of refraction of air at laboratory conditions is not sufficient. A combination of the index of refraction of the media used inside the blackbody radiator and for the optical path between blackbody and detector has to be used instead. A worst case approximation for the introduced error when neglecting these effects is presented, showing that the error is below 0.1 % for wavelengths above 200 nm. Nevertheless, for the determination of the spectral radiance for the purpose of radiation temperature measurements the correct consideration of the refractive index is mandatory. The worst case estimation reveals that the introduced error in temperature at a blackbody temperature of 3000 degrees C can be as high as 400 mK at a wavelength of 650 nm and even higher at longer wavelengths.

  17. Thermal-hydraulic analysis of N Reactor graphite and shield cooling system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, J.O.; Schmitt, B.E.

    1988-02-01

    A series of bounding (worst-case) calculations were performed using a detailed hydrodynamic RELAP5 model of the N Reactor graphite and shield cooling system (GSCS). These calculations were specifically aimed to answer issues raised by the Westinghouse Independent Safety Review (WISR) committee. These questions address the operability of the GSCS during a worst-case degraded-core accident that requires the GSCS to mitigate the consequences of the accident. An accident scenario previously developed was designated as the hydrogen-mitigation design-basis accident (HMDBA). Previous HMDBA heat transfer analysis, using the TRUMP-BD code, was used to define the thermal boundary conditions that the GSCS may be exposed to. These TRUMP/HMDBA analysis results were used to define the bounding operating conditions of the GSCS during the course of an HMDBA transient. Nominal and degraded GSCS scenarios were investigated using RELAP5 within or at the bounds of the HMDBA transient. 10 refs., 42 figs., 10 tabs.

  18. Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift

    NASA Astrophysics Data System (ADS)

    Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.

    2012-01-01

    This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
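
    A much-simplified kinematic sketch of the support-polygon check behind the ZMP idea above (roll compliance, suspension, and the frequency-domain treatment are neglected): the ZMP lateral offset is roughly CG height times lateral acceleration over g, and untripped wheel lift is flagged when it leaves the half-track. The vehicle parameters and the amplitude sweep are assumptions.

```python
# Simplified kinematic sketch of a ZMP-style wheel-lift check.
import numpy as np

cg_height = 0.65        # m, CG height (assumed)
half_track = 0.78       # m, half of the track width (assumed)
g = 9.81

def zmp_offset(lateral_accel):
    """Lateral ZMP offset of a rigid vehicle, ignoring roll dynamics."""
    return cg_height * lateral_accel / g

# Sweep peak lateral accelerations (standing in for sinusoidal steering
# amplitudes) to find the smallest one whose ZMP exits the support polygon.
amplitudes = np.linspace(1.0, 15.0, 141)          # m/s^2 peak lateral acceleration
lift = amplitudes[np.abs(zmp_offset(amplitudes)) > half_track]
if lift.size:
    print(f"wheel lift predicted above ~{lift[0]:.1f} m/s^2 peak lateral acceleration")
else:
    print("no wheel lift predicted in the sweep")
```

    In this static form the threshold reduces to the static-stability-factor limit; the frequency-domain ZMP analysis described above refines it by accounting for how the manoeuvre's frequency content excites the vehicle.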

  19. Level II scour analysis for Bridge 37, (BRNETH00740037) on Town Highway 74, crossing South Peacham Brook, Barnet, Vermont

    USGS Publications Warehouse

    Burns, Ronda L.; Severance, Timothy

    1997-01-01

    Contraction scour for all modelled flows ranged from 15.8 to 22.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.7 to 11.1 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  20. A CMOS matrix for extracting MOSFET parameters before and after irradiation

    NASA Technical Reports Server (NTRS)

    Blaes, B. R.; Buehler, M. G.; Lin, Y.-S.; Hicks, K. A.

    1988-01-01

    An addressable matrix of 16 n- and 16 p-MOSFETs was designed to extract the dc MOSFET parameters for all dc gate bias conditions before and after irradiation. The matrix contains four sets of MOSFETs, each with four different geometries that can be biased independently. Thus the worst-case bias scenarios can be determined. The MOSFET matrix was fabricated at a silicon foundry using a radiation-soft CMOS p-well LOCOS process. Co-60 irradiation results for the n-MOSFETs showed a threshold-voltage shift of -3 mV/krad(Si), whereas the p-MOSFETs showed a shift of 21 mV/krad(Si). The worst-case threshold-voltage shift occurred for the n-MOSFETs, with a gate bias of 5 V during the anneal. For the p-MOSFETs, biasing did not affect the shift in the threshold voltage. A parasitic MOSFET dominated the leakage of the n-MOSFET biased with 5 V on the gate during irradiation. Co-60 test results for other parameters are also presented.

  1. Mad cows and computer models: the U.S. response to BSE.

    PubMed

    Ackerman, Frank; Johnecheck, Wendy A

    2008-01-01

    The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.

  2. Modelling the long-term evolution of worst-case Arctic oil spills.

    PubMed

    Blanken, Hauke; Tremblay, Louis Bruno; Gaskin, Susan; Slavin, Alexander

    2017-03-15

    We present worst-case assessments of contamination in sea ice and surface waters resulting from hypothetical well blowout oil spills at ten sites in the Arctic Ocean basin. Spill extents are estimated by considering Eulerian passive tracers in the surface ocean of the MITgcm (a hydrostatic, coupled ice-ocean model). Oil in sea ice, and contamination resulting from melting of oiled ice, is tracked using an offline Lagrangian scheme. Spills are initialized on November 1st, 1980-2010, and tracked for one year. An average spill was transported 1100 km and potentially affected 1.1 million km². The direction and magnitude of simulated oil trajectories are consistent with known large-scale current and sea ice circulation patterns, and trajectories frequently cross international boundaries. The simulated trajectories of oil in sea ice match observed ice drift trajectories well. During the winter oil transport by drifting sea ice is more significant than transport with surface currents. Copyright © 2017 Elsevier Ltd. All rights reserved.
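
    The offline Lagrangian tracking mentioned above amounts to advecting particles that represent oiled ice or water through a velocity field. The sketch below shows that idea with simple forward-Euler steps; the uniform drift field, noise, and step size are illustrative assumptions, not the MITgcm fields used in the study.

```python
# Minimal sketch of offline Lagrangian particle tracking with daily Euler steps.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
positions = rng.normal(0.0, 5.0, size=(n_particles, 2))   # km around the well site

def drift_velocity(pos, day):
    """Assumed ice-drift field: slow mean drift plus random eddy-like noise."""
    mean = np.array([2.5, 1.0])                            # km/day (assumed)
    return mean + rng.normal(0.0, 1.5, size=pos.shape)

dt = 1.0                                                   # one-day steps
for day in range(365):
    positions += drift_velocity(positions, day) * dt

distance = np.linalg.norm(positions.mean(axis=0))
print(f"mean transport after one year: ~{distance:.0f} km")
```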

  3. Optimized micromirror arrays for adaptive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalicek, M. Adrian

    This paper describes the design, layout, fabrication, and surface characterization of highly optimized surface micromachined micromirror devices. Design considerations and fabrication capabilities are presented. These devices are fabricated in the state-of-the-art, four-level, planarized, ultra-low-stress polysilicon process available at Sandia National Laboratories known as the Sandia Ultra-planar Multi-level MEMS Technology (SUMMiT). This enabling process permits the development of micromirror devices with near-ideal characteristics that have previously been unrealizable in standard three-layer polysilicon processes. The reduced 1 μm minimum feature sizes and 0.1 μm mask resolution make it possible to produce dense wiring patterns and irregularly shaped flexures. Likewise, mirror surfaces can be uniquely distributed and segmented in advanced patterns and often irregular shapes in order to minimize wavefront error across the pupil. The ultra-low-stress polysilicon and planarized upper layer allow designers to make larger and more complex micromirrors of varying shape and surface area within an array while maintaining uniform performance of optical surfaces. Powerful layout functions of the AutoCAD editor simplify the design of advanced micromirror arrays and make it possible to optimize devices according to the capabilities of the fabrication process. Micromirrors fabricated in this process have demonstrated a surface variance across the array from only 2-3 nm to a worst case of roughly 25 nm while boasting active surface areas of 98% or better. Combining the process planarization with a "planarized-by-design" approach will produce micromirror array surfaces that are limited in flatness only by the surface deposition roughness of the structural material. Ultimately, the combination of advanced process and layout capabilities has permitted the fabrication of highly optimized micromirror arrays for adaptive optics. © 1999 American Institute of Physics.

  4. The effect of model uncertainty on cooperation in sensorimotor interactions

    PubMed Central

    Grau-Moya, J.; Hez, E.; Pezzulo, G.; Braun, D. A.

    2013-01-01

    Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions. PMID:23945266

  5. Multiple Microcomputer Control Algorithm.

    DTIC Science & Technology

    1979-09-01

    discrete and semaphore supervisor calls can be used with tasks in separate processors, in which case they are maintained in shared memory. Operations on ...the source or destination operand specifier of each mode in most cases. However, four of the 16 general register addressing modes and one of the 8 pro...instruction time is based on the specified usage factors and the best case and worst case execution times for the instruc...

  6. Investigation of the Human Response to Upper Torso Retraction with Weighted Helmets

    DTIC Science & Technology

    2013-09-01

    coverage of each test. The Kodak system is capable of recording high-speed motion up to a rate of 1000 frames per second. For this study, the video...the measured center-of-gravity (CG) of the worst-case test helmet fell outside the current limits and no injuries were observed, it can be stated...Figure 7. T-test Cases 1-9 (0 lb Added Helmet Weight

  7. Walsh Preprocessor.

    DTIC Science & Technology

    1980-08-01

    the sequence threshold does not utilize the DC level information and the time thresholding adaptively adjusts for DC level. This characteristic...lowest 256/8 = 32 elements. The above observation can be mathematically proven to also relate the fact that the lowest (NT/W) elements can, at worst case

  8. When Food Is a Foe.

    ERIC Educational Resources Information Center

    Fitzgerald, Patricia L.

    1998-01-01

    Although only 5% of the population has severe food allergies, school business officials must be prepared for the worst-case scenario. Banning foods and segregating allergic children are harmful practices. Education and sensible behavior are the best medicine when food allergies and intolerances are involved. Resources are listed. (MLH)

  9. Shuttle ECLSS ammonia delivery capability

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The possible effects of excessive requirements on ammonia flow rates required for entry cooling, due to extreme temperatures, on mission plans for the space shuttles, were investigated. An analysis of worst case conditions was performed, and indicates that adequate flow rates are available. No mission impact is therefore anticipated.

  10. 41 CFR 102-80.145 - What is meant by “flashover”?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...”? Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m2 (1.8 Btu/ft2/sec). Reasonable Worst Case...

  11. 41 CFR 102-80.145 - What is meant by “flashover”?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...”? Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m2 (1.8 Btu/ft2/sec). Reasonable Worst Case...

  12. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. When multiple simulations are available, the dispersion variances of blocks can be thought to capture technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations with minimization of interpolation variance, and drill hole simulations with maximization of interpolation variance. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach; the findings showed that the approach could be used to plan a new drilling campaign.
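
    The minmax structure above lends itself to a compact illustration. The following sketch is ours, not the authors' implementation: it assumes a hypothetical interpolation_variance(config, simulation) routine in place of the geostatistical computation, and uses plain random search in both spaces where the paper uses genetic operators.

```python
import random

def interpolation_variance(drill_config, simulation):
    """Hypothetical stand-in for the geostatistical step: returns the largest
    block interpolation variance when `drill_config` (a set of candidate hole
    locations) is evaluated against one orebody `simulation`."""
    rng = random.Random(hash((tuple(sorted(drill_config)), simulation)))
    return rng.uniform(0.0, 1.0)

def minmax_drilling(candidate_holes, simulations, budget, iters=200, seed=1):
    """Minimize, over drill configurations within the hole budget, the maximum
    (over simulations) interpolation variance -- the minmax idea behind the
    two-space search, with random search standing in for the GA operators."""
    rng = random.Random(seed)
    best_config, best_worst = None, float("inf")
    for _ in range(iters):
        config = frozenset(rng.sample(candidate_holes, budget))
        worst = max(interpolation_variance(config, s) for s in simulations)
        if worst < best_worst:                       # better worst case found
            best_config, best_worst = config, worst
    return best_config, best_worst

if __name__ == "__main__":
    holes = [(x, y) for x in range(10) for y in range(10)]   # candidate collars
    cfg, val = minmax_drilling(holes, simulations=range(5), budget=8)
    print(sorted(cfg), round(val, 4))
```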

  13. Bifacial PV cell with reflector for stand-alone mast for sensor powering purposes

    NASA Astrophysics Data System (ADS)

    Jakobsen, Michael L.; Thorsteinsson, Sune; Poulsen, Peter B.; Riedel, N.; Rødder, Peter M.; Rødder, Kristin

    2017-09-01

    Reflectors for bifacial PV cells are simulated and prototyped in this work. The aim is to optimize the reflector for specific latitudes, particularly northern latitudes. Specifically, using a minimum of semiconductor area, the reflector must be able to deliver the electrical power required under the worst conditions of minimum solar travel above the horizon, worst weather, etc. We will test a bifacial PV module with a retroreflector and compare the output with simulations combined with local solar data.

  14. 40 CFR 266.106 - Standards to control metals emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...

  15. 40 CFR 266.106 - Standards to control metals emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...

  16. 49 CFR 238.431 - Brake system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Brake system. 238.431 Section 238.431... Equipment § 238.431 Brake system. (a) A passenger train's brake system shall be capable of stopping the... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow...

  17. Assessment of the Incentives Created by Public Disclosure of Off-Site Consequence Analysis Information for Reduction in the Risk of Accidental Releases

    EPA Pesticide Factsheets

    The off-site consequence analysis (OCA) evaluates the potential for worst-case and alternative accidental release scenarios to harm the public and environment around the facility. Public disclosure would likely reduce the number/severity of incidents.

  18. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  19. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  20. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  1. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  2. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  3. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  4. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  5. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  6. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  7. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  8. Competitive Strategies and Financial Performance of Small Colleges

    ERIC Educational Resources Information Center

    Barron, Thomas A., Jr.

    2017-01-01

    Many institutions of higher education are facing significant financial challenges, resulting in diminished economic viability and, in the worst cases, the threat of closure (Moody's Investor Services, 2015). The study was designed to explore the effectiveness of competitive strategies for small colleges in terms of financial performance. Five…

  9. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  10. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  11. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  12. 30 CFR 254.21 - How must I format my response plan?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... divide your response plan for OCS facilities into the sections specified in paragraph (b) and explained in the other sections of this subpart. The plan must have an easily found marker identifying each.... (ii) Contractual agreements. (iii) Worst case discharge scenario. (iv) Dispersant use plan. (v) In...

  13. Safety in the Chemical Laboratory: Laboratory Air Quality: Part I. A Concentration Model.

    ERIC Educational Resources Information Center

    Butcher, Samuel S.; And Others

    1985-01-01

    Offers a simple model for estimating vapor concentrations in instructional laboratories. Three methods are described for measuring ventilation rates, and the results of measurements in six laboratories are presented. The model should provide a simple screening tool for evaluating worst-case personal exposures. (JN)

  14. A Didactic Analysis of Functional Queues

    ERIC Educational Resources Information Center

    Rinderknecht, Christian

    2011-01-01

    When first introduced to the analysis of algorithms, students are taught how to assess the best and worst cases, whereas the mean and amortized costs are considered advanced topics, usually saved for graduates. When presenting the latter, aggregate analysis is explained first because it is the most intuitive kind of amortized analysis, often…

  15. Genetically modified crops and aquatic ecosystems: considerations for environmental risk assessment and non-target organism testing.

    PubMed

    Carstens, Keri; Anderson, Jennifer; Bachman, Pamela; De Schrijver, Adinda; Dively, Galen; Federici, Brian; Hamer, Mick; Gielkens, Marco; Jensen, Peter; Lamp, William; Rauschen, Stefan; Ridley, Geoff; Romeis, Jörg; Waggoner, Annabel

    2012-08-01

    Environmental risk assessments (ERA) support regulatory decisions for the commercial cultivation of genetically modified (GM) crops. The ERA for terrestrial agroecosystems is well-developed, whereas guidance for ERA of GM crops in aquatic ecosystems is not as well-defined. The purpose of this document is to demonstrate how comprehensive problem formulation can be used to develop a conceptual model and to identify potential exposure pathways, using Bacillus thuringiensis (Bt) maize as a case study. Within problem formulation, the insecticidal trait, the crop, the receiving environment, and protection goals were characterized, and a conceptual model was developed to identify routes through which aquatic organisms may be exposed to insecticidal proteins in maize tissue. Following a tiered approach for exposure assessment, worst-case exposures were estimated using standardized models, and factors mitigating exposure were described. Based on exposure estimates, shredders were identified as the functional group most likely to be exposed to insecticidal proteins. However, even using worst-case assumptions, the exposure of shredders to Bt maize was low and studies supporting the current risk assessments were deemed adequate. Determining if early tier toxicity studies are necessary to inform the risk assessment for a specific GM crop should be done on a case by case basis, and should be guided by thorough problem formulation and exposure assessment. The processes used to develop the Bt maize case study are intended to serve as a model for performing risk assessments on future traits and crops.

  16. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications from environment monitoring to event detection. The key challenge is to provide energy efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is to design an adaptive low latency and energy efficient MAC for data gathering to reduce the sleep latency. We propose staggered schedule, duty cycle adaptation, data prediction and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contentions. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second research effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst case end-to-end latency (referred to as delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for the tree and ring topologies and heuristics for arbitrary topologies. The third study addresses the problem of minimum latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be solved by M node-disjoint paths algorithm under multiple channel model. We further extended the algorithm to handle dynamic traffic changes and topology changes. A heuristic solution is proposed for MLSR under single channel interference. In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control and present both exponential and polynomial complexity solutions. Then we investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach which converges rapidly to the optimal parameter settings.
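
    The delay-diameter objective used in DESS can be stated compactly: with one wake slot per node in a period of T slots, the per-hop delay from u to v is the wait until v's next wake slot, and the delay diameter is the largest shortest-path delay over all node pairs. The sketch below is a hedged reconstruction of that metric (not code from the thesis); the topology, slot assignment, and the Floyd-Warshall step are illustrative.

```python
from itertools import product

def edge_delay(slot_u, slot_v, period):
    """Slots until a packet forwarded at u's wake slot can be received
    at v's next wake slot (at least one full slot)."""
    d = (slot_v - slot_u) % period
    return d if d > 0 else period

def delay_diameter(nodes, edges, slots, period):
    """Worst-case end-to-end latency over all ordered node pairs,
    using Floyd-Warshall on the slot-induced per-hop delays."""
    INF = float("inf")
    dist = {(u, v): 0 if u == v else INF for u, v in product(nodes, nodes)}
    for u, v in edges:                                  # undirected links
        dist[u, v] = min(dist[u, v], edge_delay(slots[u], slots[v], period))
        dist[v, u] = min(dist[v, u], edge_delay(slots[v], slots[u], period))
    for k, i, j in product(nodes, nodes, nodes):        # all-pairs shortest delays
        if dist[i, k] + dist[k, j] < dist[i, j]:
            dist[i, j] = dist[i, k] + dist[k, j]
    return max(d for d in dist.values() if d < INF)

if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
    slots = {"a": 0, "b": 3, "c": 1, "d": 2}            # one wake slot per node
    print(delay_diameter(nodes, edges, slots, period=4))
```

    Minimizing this quantity over all slot assignments is the optimization problem DESS addresses; the sketch only evaluates a given assignment.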

  17. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power scales quadratically with voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and yield optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about task size (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput. Such a scheme has the advantage of allowing modules on critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
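
    As a hedged illustration of the Lagrange-multiplier step (an idealized single-deadline model of our own, not the dissertation's exact formulation): suppose task i executes n_i cycles at supply voltage V_i, with energy n_i C V_i^2 and clock speed proportional to V_i, and that all tasks must finish by a common deadline D.

```latex
\min_{V_1,\dots,V_N} \; \sum_i n_i C V_i^2
\quad \text{s.t.} \quad \sum_i \frac{n_i}{k V_i} = D,
\qquad
\mathcal{L} = \sum_i n_i C V_i^2 + \lambda\Bigl(\sum_i \tfrac{n_i}{k V_i} - D\Bigr),
\qquad
\frac{\partial \mathcal{L}}{\partial V_i}
  = 2 n_i C V_i - \lambda\,\frac{n_i}{k V_i^{2}} = 0
\;\Longrightarrow\; V_i^{3} = \frac{\lambda}{2 C k} \ \ \text{for all } i .
```

    Under this simplified model the stationarity condition forces every task to the same voltage (the classic constant-speed result); with per-task deadlines, buffers, or peak-power limits, the same multiplier condition instead couples the voltages of interacting tasks, which is the kind of relation the dissertation exploits.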

  18. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), the fireflies are ranked using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension was varied the algorithm performed better at lower dimensions than at higher ones.
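
    The sorting comparison can be made concrete with a small sketch (ours, not the paper's code): a firefly population is ranked by objective value with an instrumented bubble sort and quick sort so the comparison counts and timings can be inspected; the sphere benchmark and population size are illustrative.

```python
import random
import time

def bubble_sort(keys):
    """Bubble sort on a list of fitness values; returns (sorted, comparisons)."""
    a, comps = list(keys), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comps

def quick_sort(keys):
    """Simple out-of-place quick sort; returns (sorted, comparisons)."""
    comps = 0
    def qs(a):
        nonlocal comps
        if len(a) <= 1:
            return a
        pivot, lo, hi = a[0], [], []
        for x in a[1:]:
            comps += 1
            (lo if x < pivot else hi).append(x)
        return qs(lo) + [pivot] + qs(hi)
    return qs(list(keys)), comps

def sphere(x):
    """Illustrative benchmark objective (lower is better)."""
    return sum(v * v for v in x)

if __name__ == "__main__":
    random.seed(0)
    fireflies = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(50)]
    brightness = [sphere(f) for f in fireflies]
    for name, sorter in (("bubble", bubble_sort), ("quick", quick_sort)):
        t0 = time.perf_counter()
        _, comps = sorter(brightness)
        print(name, "comparisons:", comps,
              "time (s):", round(time.perf_counter() - t0, 6))
```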

  19. The floating knee: a review on ipsilateral femoral and tibial fractures

    PubMed Central

    Muñoz Vives, Josep; Bel, Jean-Christophe; Capel Agundez, Arantxa; Chana Rodríguez, Francisco; Palomo Traver, José; Schultz-Larsen, Morten; Tosounidis, Theodoros

    2016-01-01

    In 1975, Blake and McBryde established the concept of ‘floating knee’ to describe ipsilateral fractures of the femur and tibia.1 This combination is much more than a bone lesion; the mechanism is usually a high-energy trauma in a patient with multiple injuries and a myriad of other lesions. After initial evaluation, patients should be categorised, and only stable patients should undergo immediate reduction and internal fixation, with the rest receiving external fixation. Definitive internal fixation of both bones yields the best results in almost all series. Nailing of both bones is the optimal fixation when both fractures (femoral and tibial) are extra-articular. Plates are the ‘standard of care’ in cases with articular fractures. A combination of implants is required by 40% of floating knees. Associated ligamentous and meniscal lesions are common, but may be irrelevant in the case of an intra-articular fracture, which carries the worst prognosis for this type of lesion. Cite this article: Muñoz Vives K, Bel J-C, Capel Agundez A, Chana Rodríguez F, Palomo Traver J, Schultz-Larsen M, Tosounidis, T. The floating knee. EFORT Open Rev 2016;1:375-382. DOI: 10.1302/2058-5241.1.000042. PMID:28461916

  20. In vivo RF powering for advanced biological research.

    PubMed

    Zimmerman, Mark D; Chaimanonart, Nattapon; Young, Darrin J

    2006-01-01

    An optimized remote powering architecture with a miniature, implantable RF power converter for an untethered small laboratory animal inside a cage is proposed. The proposed implantable device exhibits dimensions of less than 6 mm × 6 mm × 1 mm and a mass of 100 mg including a medical-grade silicone coating. The external system consists of a Class-E power amplifier driving a tuned 15 cm × 25 cm external coil placed underneath the cage. The implant device is located in the animal's abdomen in a plane parallel to the external coil and utilizes inductive coupling to receive power from the external system. A half-wave rectifier rectifies the received AC voltage and passes the resulting DC current to a 2.5 kΩ resistor, which represents the loading of an implantable microsystem. An optimal operating point with respect to operating frequency and number of turns in each coil inductor was determined by analyzing the system efficiency. The determined optimal operating condition is based on a 4-turn external coil and a 20-turn internal coil operating at 4 MHz. With the Class-E amplifier consuming a constant power of 25 W, this operating condition is sufficient to supply the desired 3.2 V at 1.3 mA to the load over a cage size of 10 cm × 20 cm with an animal tilting angle of up to 60 degrees, which is the worst case considered for the prototype design. A voltage regulator can be designed to regulate the received DC power to a stable supply for the bio-implant microsystem.
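
    As a back-of-the-envelope check on the figures quoted above (our arithmetic, not a number reported by the authors), the power delivered to the 2.5 kΩ load and the implied end-to-end efficiency at the 25 W amplifier input are roughly

```latex
P_{\mathrm{load}} = V I \approx 3.2\,\mathrm{V} \times 1.3\,\mathrm{mA} \approx 4.2\,\mathrm{mW},
\qquad
\eta \approx \frac{4.2\,\mathrm{mW}}{25\,\mathrm{W}} \approx 1.7 \times 10^{-4}.
```

    That is on the order of 0.02%, which reflects the worst-case geometry (animal near the cage edge, tilted up to 60 degrees) that the 25 W budget is sized to cover, rather than the typical coupling condition.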

  1. Tissue Engineering Initiative

    DTIC Science & Technology

    2000-08-01

    forefoot with the foot in the neutral position, and (b) similar to (a) but with heel landing. Although the authors reported no absolute strain values...diameter of sensors (or, in the case of a rectangular sensor, width as measured along pin axis). Worst case: Strike line from inside edges of sensors...potoroo it is just prior to "toe strike". The locomotion of the potoroo is described as digitigrade, unlike humans, who walk in a plantigrade manner

  2. Space Based Intelligence, Surveillance, and Reconnaissance Contribution to Global Strike in 2035

    DTIC Science & Technology

    2012-02-15

    include using high altitude air platforms and airships as a short-term solution, and small satellites with an Operationally Responsive Space (ORS) launch...irreversible threats, along with a worst case scenario. Section IV provides greater detail of the high altitude air platform, airship , and commercial space...Resultantly, the U.S. could use high altitude air platforms, airships , and cyber to complement its space systems in case of denial, degradation, or

  3. Managing in a New Reality

    ERIC Educational Resources Information Center

    Goldstein, Philip J.

    2009-01-01

    The phrase "worst since the Great Depression" has seemingly punctuated every economic report. The United States is experiencing the worst housing market, the worst unemployment level, and the worst drop in gross domestic product since the Great Depression. Although the steady drumbeat of bad news may have made everyone nearly numb, one…

  4. Exact solutions for species tree inference from discordant gene trees.

    PubMed

    Chang, Wen-Chieh; Górecki, Paweł; Eulenstein, Oliver

    2013-10-01

    Phylogenetic analysis has to overcome the grand challenge of inferring accurate species trees from evolutionary histories of gene families (gene trees) that are discordant with the species tree along whose branches they have evolved. Two well-studied approaches to cope with this challenge are to solve either biologically informed gene tree parsimony (GTP) problems under gene duplication, gene loss, and deep coalescence, or the classic RF supertree problem that does not rely on any biological model. Despite the potential of these problems to infer credible species trees, they are NP-hard. Therefore, these problems are addressed by heuristics that typically lack any provable accuracy and precision. We describe fast dynamic programming algorithms that solve the GTP problems and the RF supertree problem exactly, and demonstrate that our algorithms can solve instances with data sets consisting of as many as 22 taxa. Extensions of our algorithms can also report the number of all optimal species trees, as well as the trees themselves. To better assess the quality of the resulting species trees that best fit the given gene trees, we also compute the worst-case species trees, their numbers, and the optimization score for each of the computational problems. Finally, we demonstrate the performance of our exact algorithms using empirical and simulated data sets, and analyze the quality of heuristic solutions for the studied problems by contrasting them with our exact solutions.

  5. Robustness Recipes for Minimax Robust Optimization in Intensity Modulated Proton Therapy for Oropharyngeal Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voort, Sebastian van der; Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft; Water, Steven van de

    Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR only depends on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can be used in minimax robust optimization of IMPT treatment plans to ensure adequate CTV coverage for oropharyngeal cancer patients.
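
    The recipe polynomials quoted above can be evaluated directly. The following is a hedged convenience wrapper around the published formulas (function and argument names are ours, not the paper's); Σ and σ are taken in millimetres, as in the validation setting, and the result is the setup-robustness setting together with the range-robustness setting in percent.

```python
def robustness_recipe(Sigma, sigma, rho_pct, bilateral):
    """Evaluate the published robustness-recipe polynomials.

    Sigma     -- systematic setup error SD (mm)
    sigma     -- random setup error SD (mm)
    rho_pct   -- range error SD in percent (1 or 2 in the paper)
    bilateral -- True for bilateral, False for unilateral cases
    Returns (setup_robustness, range_robustness_pct).
    """
    if bilateral:
        sr = -0.07 * Sigma**2 + 0.19 * sigma**2 + 1.34 * Sigma - 0.07 * sigma + 1.17
        rr = 3.0 if rho_pct <= 1 else 4.0
    else:
        sr = -0.15 * Sigma**2 + 0.27 * sigma**2 + 1.85 * Sigma - 0.06 * sigma + 1.22
        rr = 3.0
    return sr, rr

if __name__ == "__main__":
    # The validation setting quoted in the abstract: Sigma = sigma = 1.5 mm, rho = 2%.
    print("unilateral:", robustness_recipe(1.5, 1.5, 2, bilateral=False))
    print("bilateral: ", robustness_recipe(1.5, 1.5, 2, bilateral=True))
```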

  6. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  7. The lionfish Pterois sp. invasion: Has the worst-case scenario come to pass?

    PubMed

    Côté, I M; Smith, N S

    2018-03-01

    This review revisits the traits thought to have contributed to the success of Indo-Pacific lionfish Pterois sp. as an invader in the western Atlantic Ocean and the worst-case scenario about their potential ecological effects in light of the more than 150 studies conducted in the past 5 years. Fast somatic growth, resistance to parasites, effective anti-predator defences and an ability to circumvent predator recognition mechanisms by prey have probably contributed to rapid population increases of lionfish in the invaded range. However, evidence that lionfish are strong competitors is still ambiguous, in part because demonstrating competition is challenging. Geographic spread has likely been facilitated by the remarkable capacity of lionfish for prolonged fasting in combination with other broad physiological tolerances. Lionfish have had a large detrimental effect on native reef-fish populations in the northern part of the invaded range, but similar effects have yet to be seen in the southern Caribbean. Most other envisaged direct and indirect consequences of lionfish predation and competition, even those that might have been expected to occur rapidly, such as shifts in benthic composition, have yet to be realized. Lionfish populations in some of the first areas invaded have started to decline, perhaps as a result of resource depletion or ongoing fishing and culling, so there is hope that these areas have already experienced the worst of the invasion. In closing, we place lionfish in a broader context and argue that it can serve as a new model to test some fundamental questions in invasion ecology. © 2018 The Fisheries Society of the British Isles.

  8. Costs and cost-effectiveness of 9-valent human papillomavirus (HPV) vaccination in two East African countries.

    PubMed

    Kiatpongsan, Sorapop; Kim, Jane J

    2014-01-01

    Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15-30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. 
Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain.

  9. Costs and Cost-Effectiveness of 9-Valent Human Papillomavirus (HPV) Vaccination in Two East African Countries

    PubMed Central

    Kiatpongsan, Sorapop; Kim, Jane J.

    2014-01-01

    Background Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15–30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). Methods and Findings The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. Conclusions This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. 
Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain. PMID:25198104

  10. A Systematic Review Comparing the Acceptability, Validity and Concordance of Discrete Choice Experiments and Best-Worst Scaling for Eliciting Preferences in Healthcare.

    PubMed

    Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia

    2018-06-01

    The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.

  11. Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro

    2018-02-01

    We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.
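
    In symbols, the construction described above can be sketched as follows (a schematic rendering assembled from the abstract, with notation chosen here rather than copied from the paper): for target observables A and B, an approximate joint measurement M with margins M_1 and M_2, and a state ρ,

```latex
S(p \,\|\, q) = \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}
\quad\text{(relative entropy of outcome distributions)},
\qquad
D_\rho(A,B \,\|\, M) = S\!\left(p^{A}_{\rho} \,\big\|\, p^{M_1}_{\rho}\right)
                     + S\!\left(p^{B}_{\rho} \,\big\|\, p^{M_2}_{\rho}\right),
\qquad
c_{\mathrm{inc}}(A,B) = \min_{M}\,\max_{\rho}\, D_\rho(A,B \,\|\, M).
```

    The maximization picks out the worst possible state and the minimization the optimal approximating joint measurement; the resulting quantity is the state-independent lower bound and incompatibility degree discussed above, vanishing exactly when A and B are compatible.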

  12. Scheduling Independent Partitions in Integrated Modular Avionics Systems

    PubMed Central

    Du, Chenglie; Han, Pengcheng

    2016-01-01

    Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under the worst case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We firstly present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions, by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the approach proposed in terms of time consumption and acceptance ratio. PMID:27942013
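
    The "maximum scaling factor" idea admits a deliberately simplified illustration (our sketch, based on a utilization model rather than the paper's partition-schedule formulation): given partitions with budgets and periods already allocated to processors, the maximum scaling factor is the largest α by which every budget can be inflated while each processor's total utilization stays at or below 1, and the allocation is accepted in this model when α ≥ 1.

```python
def max_scaling_factor(allocation):
    """allocation: {processor: [(budget, period), ...]}.
    Returns the largest alpha such that alpha * sum(budget / period) <= 1
    on every processor (a utilization-based simplification)."""
    worst_util = max(
        sum(budget / period for budget, period in parts)
        for parts in allocation.values()
    )
    return float("inf") if worst_util == 0 else 1.0 / worst_util

if __name__ == "__main__":
    # Illustrative (budget, period) pairs in ms: three partitions on cpu0, two on cpu1.
    alloc = {
        "cpu0": [(5, 25), (10, 50), (4, 40)],
        "cpu1": [(20, 100), (8, 40)],
    }
    alpha = max_scaling_factor(alloc)
    print("max scaling factor:", round(alpha, 3), "schedulable:", alpha >= 1.0)
```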

  13. Linear diffusion into a Faraday cage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warne, Larry Kevin; Lin, Yau Tang; Merewether, Kimball O.

    2011-11-01

    Linear lightning diffusion into a Faraday cage is studied. An early-time integral valid for large ratios of enclosure size to enclosure thickness and small relative permeability (µ/µ₀ ≤ 10) is used for this study. Existing solutions for nearby lightning impulse responses of electrically thick-wall enclosures are refined and extended to calculate the nearby lightning magnetic field (H) and time-derivative magnetic field (HDOT) inside enclosures of varying thickness caused by a decaying exponential excitation. For a direct strike scenario, the early-time integral for a worst-case line source outside the enclosure caused by an impulse is simplified and numerically integrated to give the interior H and HDOT at the location closest to the source as well as a function of distance from the source. H and HDOT enclosure response functions for decaying exponentials are considered for an enclosure wall of any thickness. Simple formulas are derived to provide a description of enclosure interior H and HDOT as well. Direct strike voltage and current bounds for a single-turn optimally-coupled loop for all three waveforms are also given.

  14. Status of the Correlation Process of the V-HAB Simulation with Ground Tests and ISS Telemetry Data

    NASA Technical Reports Server (NTRS)

    Ploetner, Peter; Anderson, Molly S.; Czupalla, Markus; Ewert, Michael K.; Roth, Christof Martin; Zhulov, Anton

    2012-01-01

    The Virtual Habitat (V-HAB) is a dynamic Life Support System (LSS) simulation, created to investigate future human spaceflight missions. V-HAB provides the capability to optimize LSS during early design phases. Furthermore, it allows simulation of worst case scenarios which cannot be tested in reality. In a nutshell, the tool allows the testing of LSS robustness by means of computer simulations. V-HAB is a modular simulation consisting of a: 1. Closed Environment Module 2. Crew Module 3. Biological Module 4. Physio-Chemical Module The focus of the paper will be the correlation and validation of V-HAB against ground test and flight data. The ECLSS technologies (CDRA, CCAA, OGA, etc.) are correlated one by one against available ground test data, which is briefly described in this paper. The technology models in V-HAB are merged to simulate the ISS ECLSS. This simulation is correlated against telemetry data from the ISS, including the water recovery system and the air revitalization system. Finally, an analysis of the results is included in this paper.

  15. The minimum control authority of a system of actuators with applications to Gravity Probe-B

    NASA Technical Reports Server (NTRS)

    Wiktor, Peter; Debra, Dan

    1991-01-01

    The forcing capabilities of systems composed of many actuators are analyzed in this paper. Multiactuator systems can generate higher forces in some directions than in others. Techniques are developed to find the force in the weakest direction. This corresponds to the worst-case output and is defined as the 'minimum control authority'. The minimum control authority is a function of three things: the actuator configuration, the actuator controller and the way in which the output of the system is limited. Three output limits are studied: (1) fuel-flow rate, (2) power, and (3) actuator output. The three corresponding actuator controllers are derived. These controllers generate the desired force while minimizing either fuel flow rate, power or actuator output. It is shown that using the optimal controller can substantially increase the minimum control authority. The techniques for calculating the minimum control authority are applied to the Gravity Probe-B spacecraft thruster system. This example shows that the minimum control authority can be used to design the individual actuators, choose actuator configuration, actuator controller, and study redundancy.
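
    A hedged numerical sketch of the "weakest direction" computation (our construction, not the paper's): with a linear actuator map F = A u and unit actuator-output limits |u_i| ≤ 1, the maximum force achievable along a unit direction d is the largest λ for which A u = λ d is feasible, found by a small linear program; sampling directions and taking the minimum approximates the minimum control authority for the actuator-output-limited case.

```python
import numpy as np
from scipy.optimize import linprog

def max_force_along(A, d):
    """Largest lambda such that A @ u = lambda * d with |u_i| <= 1."""
    m = A.shape[1]
    # Variables x = [u_1 .. u_m, lambda]; maximize lambda -> minimize -lambda.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_eq = np.hstack([A, -d.reshape(-1, 1)])
    b_eq = np.zeros(A.shape[0])
    bounds = [(-1.0, 1.0)] * m + [(0.0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def minimum_control_authority(A, n_dirs=360):
    """Approximate the force in the weakest direction by sampling
    unit directions (a 2-D force space is assumed here)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return min(max_force_along(A, d) for d in dirs)

if __name__ == "__main__":
    # Illustrative 2-D force map for four unit-limited actuators.
    A = np.array([[1.0, 0.0, 0.7, -0.7],
                  [0.0, 1.0, 0.7,  0.7]])
    print("minimum control authority ~", round(minimum_control_authority(A), 3))
```

    Replacing the actuator bounds with fuel-flow or power constraints changes only the feasible set in the inner problem, which is how the three output limits discussed above enter the computation.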

  16. Fast marching methods for the continuous traveling salesman problem

    PubMed Central

    Andrews, June; Sethian, J. A.

    2007-01-01

    We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points (“cities”) in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The complexity of the heuristic algorithm is at worst case M·N log N, where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest paths problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh. PMID:17220271
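
    The two-stage structure described above (solve shortest-path problems through the cost field, then order the cities) can be sketched compactly. In the sketch below, a Dijkstra sweep on a grid graph stands in for the fast marching eikonal solve and a nearest-neighbour tour stands in for the heuristic city ordering; the cost field, grid size, and city positions are illustrative.

```python
import heapq

def grid_distances(cost, src):
    """Dijkstra from `src` over a 2-D grid of per-cell traversal costs
    (a stand-in for the fast marching solution of the eikonal equation)."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (cost[r][c] + cost[nr][nc])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

def nearest_neighbour_tour(cities, cost):
    """Greedy closed tour over the cities using field-aware distances."""
    field = {a: grid_distances(cost, a) for a in cities}   # M shortest-path sweeps
    tour, remaining = [cities[0]], set(cities[1:])
    while remaining:
        here = tour[-1]
        nxt = min(remaining, key=lambda b: field[here][b[0]][b[1]])
        tour.append(nxt)
        remaining.discard(nxt)
    length = sum(field[tour[i]][tour[i + 1][0]][tour[i + 1][1]]
                 for i in range(len(tour) - 1))
    length += field[tour[-1]][tour[0][0]][tour[0][1]]      # close the loop
    return tour, length

if __name__ == "__main__":
    cost = [[1.0] * 20 for _ in range(20)]
    for r in range(5, 15):
        cost[r][10] = 20.0                  # an expensive "ridge" to route around
    cities = [(1, 1), (18, 2), (2, 17), (17, 18)]
    tour, length = nearest_neighbour_tour(cities, cost)
    print(tour, round(length, 2))
```

    With M cities and an N-cell mesh, this performs M single-source sweeps of O(N log N) each, mirroring the M·N log N worst case quoted above.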

  17. Fast marching methods for the continuous traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, J.; Sethian, J.A.

    We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ('cities') in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the Traveling Salesman Problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The order of the heuristic algorithm is at worst case M·N log N, where M is the number of cities, and N the size of the computational mesh used to approximate the solutions to the shortest paths problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.

  18. Measured effects of wind turbine generation at the Block Island Power Company

    NASA Technical Reports Server (NTRS)

    Wilreker, V. F.; Smith, R. F.; Stiller, P. H.; Scot, G. W.; Shaltens, R. K.

    1984-01-01

    Data measurements made on the NASA MOD-OA 200-kW wind-turbine generator (WTG) installed on a utility grid form the basis for an overall performance analysis. Fuel displacement/savings, dynamic interactions, and WTG excitation (reactive-power) control effects are studied. Continuous recording of a large number of electrical and mechanical variables on FM magnetic tape permits evaluation and correlation of phenomena over a bandwidth of at least 20 Hz. Because the wind-power penetration reached peaks of 60 percent, the impact of wind fluctuation and wind-turbine/diesel-utility interaction is evaluated in a worst-case scenario. The speed-governor dynamics of the diesel units exhibited an underdamped response, and the utility operation procedures were not altered to optimize overall WTG/utility performance. Primary findings over the data collection period are: a calculated 6.7-percent reduction in fuel consumption while generating 11 percent of the total electrical energy; acceptable system voltage and frequency fluctuations with WTG connected; and applicability of WTG excitation schemes using voltage, power, or VARS as the controlled variable.

  19. 78 FR 53494 - Dam Safety Modifications at Cherokee, Fort Loudoun, Tellico, and Watts Bar Dams

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... fundamental part of this mission was the construction and operation of an integrated system of dams and... by the Federal Emergency Management Agency, TVA prepares for the worst case flooding event in order... appropriate best management practices during all phases of construction and maintenance associated with the...

  20. Task 1, Design Analysis Report: Pulsed plasma solid propellant microthruster for the synchronous meteorological satellite

    NASA Technical Reports Server (NTRS)

    Guman, W. J. (Editor)

    1971-01-01

    Thermal vacuum design-support thruster tests indicate no problems under the worst-case conditions of sink temperature and spin rate. The reliability of the system was calculated to be 0.92 for a five-year mission; excluding the main energy storage capacitor, it is 0.98.

  1. 40 CFR 300.320 - General pattern of response.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...., substantial threat to the public health or welfare of the United States, worst case discharge) of the... private party efforts, and where the discharge does not pose a substantial threat to the public health or... 40 Protection of Environment 27 2010-07-01 2010-07-01 false General pattern of response. 300.320...

  2. Small Wars 2.0: A Working Paper on Land Force Planning After Iraq and Afghanistan

    DTIC Science & Technology

    2011-02-01

    official examination of future ground combat demands that look genetically distinct from those undertaken in the name of the WoT. The concept of...under the worst-case rubric but for very different reasons. The latter are small wars. However, that by no means aptly describes their size

  3. The +vbar breakout during approach to Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Dunham, Scott D.

    1993-01-01

    A set of burn profiles was developed to provide bounding jet firing histories for a +vbar breakout during approaches to Space Station Freedom. The delta-v sequences were designed to place the Orbiter on a safe trajectory under worst case conditions and to try to minimize plume impingement on Space Station Freedom structure.

  4. A Comparison of Learning Technologies for Teaching Spacecraft Software Development

    ERIC Educational Resources Information Center

    Straub, Jeremy

    2014-01-01

    The development of software for spacecraft represents a particular challenge and is, in many ways, a worst case scenario from a design perspective. Spacecraft software must be "bulletproof" and operate for extended periods of time without user intervention. If the software fails, it cannot be manually serviced. Software failure may…

  5. Providing Exemplars in the Learning Environment: The Case For and Against

    ERIC Educational Resources Information Center

    Newlyn, David

    2013-01-01

    Contemporary education has moved towards the requirement of express articulation of assessment criteria and standards in an attempt to provide legitimacy in the measurement of student performance/achievement. Exemplars are provided examples of best or worst practice in the educational environment, which are designed to assist students to increase…

  6. Ageing of Insensitive DNAN Based Melt-Cast Explosives

    DTIC Science & Technology

    2014-08-01

    diurnal cycle (representative of the MEAO climate). Analysis of the ingredient composition, sensitiveness, mechanical and thermal properties was...first test condition was chosen to provide a worst-case scenario. Analysis of the ingredient composition, theoretical maximum density, sensitiveness...

  7. Power Analysis for Anticipated Non-Response in Randomized Block Designs

    ERIC Educational Resources Information Center

    Pustejovsky, James E.

    2011-01-01

    Recent guidance on the treatment of missing data in experiments advocates the use of sensitivity analysis and worst-case bounds analysis for addressing non-ignorable missing data mechanisms; moreover, plans for the analysis of missing data should be specified prior to data collection (Puma et al., 2009). While these authors recommend only that…

  8. 33 CFR 154.1120 - Operating restrictions and interim operating authorization.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Facility Operating in Prince William Sound, Alaska § 154.1120 Operating restrictions and interim operating authorization. (a) The owner or operator of a TAPAA facility may not operate in Prince William Sound, Alaska... practicable, a worst case discharge or a discharge of 200,000 barrels of oil, whichever is greater, in Prince...

  9. Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground

    ERIC Educational Resources Information Center

    Oberg, Gunilla

    2009-01-01

    Newcomers often underestimate the challenges of interdisciplinary work and, as a rule, do not spend sufficient time to allow them to overcome differences and create common ground, which in turn leads to frustration, unresolved conflicts, and, in the worst case scenario, discontinued work. The key to successful collaboration is to facilitate the…

  10. 76 FR 34799 - Permanent Dam Safety Modification at Cherokee, Fort Loudoun, Tellico, and Watts Bar Dams, TN

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ... notice is provided in accordance with the Council on Environmental Quality's regulations (40 CFR parts... interconnected, fabric-lined, sand-filled HESCO containers in order to safely pass predicted worst-case..., but will not necessarily be limited to, the potential impacts on water quality, aquatic and...

  11. Mixed-Age Grouping in Kindergarten: A Best Case Example of Developmentally Appropriate Practice or Horace Mann's Worst Nightmare?

    ERIC Educational Resources Information Center

    Tercek, Patricia M.

    This practicum study examined kindergarten teachers' perspectives regarding mixed-age groupings that included kindergarten students. The study focused on pedagogical reasons for using mixed-age grouping, ingredients necessary for successful implementation of a multiage program that includes kindergartners, and the perceived effects of a multiage…

  12. Case Study: POLYTECH High School, Woodside, Delaware.

    ERIC Educational Resources Information Center

    Southern Regional Education Board, Atlanta, GA.

    POLYTECH High School in Woodside, Delaware, has gone from being among the worst schools in the High Schools That Work (HSTW) network to among the best. Polytech, which is now a full-time technical high school, has improved its programs and outcomes by implementing a series of organizational, curriculum, teaching, guidance, and leadership changes,…

  13. Guide for Oxygen Component Qualification Tests

    NASA Technical Reports Server (NTRS)

    Bamford, Larry J.; Rucker, Michelle A.; Dobbin, Douglas

    1996-01-01

    Oxygen is a chemically stable element: it is not shock sensitive, will not decompose, and is not flammable. Nevertheless, oxygen use carries a risk that should never be overlooked, because oxygen is a strong oxidizer that vigorously supports combustion. Safety is of primary concern in oxygen service. To promote safety in oxygen systems, the flammability of materials used in them should be analyzed. At the NASA White Sands Test Facility (WSTF), we have performed configurational tests of components specifically engineered for oxygen service. These tests follow a detailed WSTF oxygen hazards analysis. The stated objective of the tests was to provide performance test data for customer use as part of a qualification plan for a particular component in a particular configuration, and under worst-case conditions. In this document - the 'Guide for Oxygen Component Qualification Tests' - we outline recommended test systems, and cleaning, handling, and test procedures that address worst-case conditions. It should be noted that test results apply specifically to: manual valves, remotely operated valves, check valves, relief valves, filters, regulators, flexible hoses, and intensifiers. Component systems are not covered.

  14. Evaluating predictors of lead exposure for activities disturbing materials painted with or containing lead using historic published data from U.S. workplaces.

    PubMed

    Locke, Sarah J; Deziel, Nicole C; Koh, Dong-Hee; Graubard, Barry I; Purdue, Mark P; Friesen, Melissa C

    2017-02-01

    We evaluated predictors of differences in published occupational lead concentrations for activities disturbing material painted with or containing lead in U.S. workplaces to aid historical exposure reconstruction. For the aforementioned tasks, 221 air and 113 blood lead summary results (1960-2010) were extracted from a previously developed database. Differences in the natural log-transformed geometric mean (GM) for year, industry, job, and other ancillary variables were evaluated in meta-regression models that weighted each summary result by its inverse variance and sample size. Air and blood lead GMs declined 5%/year and 6%/year, respectively, in most industries. Exposure contrast in the GMs across the nine jobs and five industries was higher based on air versus blood concentrations. For welding activities, blood lead GMs were 1.7 times higher in worst-case versus non-worst-case scenarios. Job, industry, and time-specific exposure differences were identified; other determinants were too sparse or collinear to characterize. Am. J. Ind. Med. 60:189-197, 2017. © 2017 Wiley Periodicals, Inc.

  15. A radiation briefer's guide to the PIKE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steadman, Jr, C R

    1990-03-01

    Gamma-radiation-exposure estimates to populations living immediately downwind from the Nevada Test Site have been required for many years by the US Department of Energy (DOE) before each containment-designed nuclear detonation. A highly unlikely "worst-case" scenario is utilized which assumes that there will be an accidental massive venting of radioactive debris into the atmosphere shortly after detonation. The Weather Service Nuclear Support Office (WSNSO) has supplied DOE with such estimates for the last 25 years using the WSNSO Fallout Scaling Technique (FOST), which employs a worst-case analog event that actually occurred in the past. The "PIKE Model" is the application of the FOST using the PIKE nuclear event as the analog. This report, which is primarily intended for WSNSO meteorologists who derive radiation estimates, gives a brief history of the "model," presents the mathematical, radiological, and meteorological concepts upon which it is based, states its limitations, explains its apparent advantages over more sophisticated models, and details how it is used operationally. 10 refs., 31 figs., 7 tabs.

  16. Housing for the "Worst of the Worst" Inmates: Public Support for Supermax Prisons

    ERIC Educational Resources Information Center

    Mears, Daniel P.; Mancini, Christina; Beaver, Kevin M.; Gertz, Marc

    2013-01-01

    Despite concerns whether supermaximum security prisons violate human rights or prove effective, these facilities have proliferated in America over the past 25 years. This punishment--aimed at the "worst of the worst" inmates and involving 23-hr-per-day single-cell confinement with few privileges or services--has emerged despite little…

  17. Hydraulic Fracturing of Soils; A Literature Review.

    DTIC Science & Technology

    1977-03-01

    best case, or worst case. The study reported herein is an overview of one such test or technique, hydraulic fracturing, which is defined as the...formation of cracks in soil by the application of hydraulic pressure greater than the minor principal stress at that point. Hydraulic fracturing, as a... hydraulic fracturing as a means for determination of lateral stresses, the technique can still be used for determining in situ total stress and permeability at a point in a cohesive soil.

  18. Health risk impacts analysis of fugitive aromatic compounds emissions from the working face of a municipal solid waste landfill in China.

    PubMed

    Liu, Yanjun; Liu, Yanting; Li, Hao; Fu, Xindi; Guo, Hanwen; Meng, Ruihong; Lu, Wenjing; Zhao, Ming; Wang, Hongtao

    2016-12-01

    Aromatic compounds (ACs) emitted from landfills have attracted considerable public attention due to their adverse impacts on the environment and human health. This study assessed the health risk impacts of the fugitive ACs emitted from the working face of a municipal solid waste (MSW) landfill in China. The emission data were acquired by long-term in-situ sampling using a modified wind tunnel system. The uncertainty of aromatic emissions was determined statistically, and emission factors were thus developed. Two scenarios, i.e. 'normal-case' and 'worst-case', were presented to evaluate the potential health risk under different weather conditions. For this typical large anaerobic landfill, toluene was the dominant species owing to its highest release rate (3.40 ± 3.79 g·m⁻²·d⁻¹). Despite posing negligible non-carcinogenic risk, the ACs might pose carcinogenic risks to humans in the nearby area. Ethylbenzene was the major health threat substance. The cumulative carcinogenic risk impact area extends as far as ~1.5 km in the downwind direction for the normal-case scenario, and nearly 4 km for the worst-case scenario. Health risks of fugitive AC emissions from active landfills should be of concern, especially for landfills still receiving mixed MSW. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. "Best Case/Worst Case": Qualitative Evaluation of a Novel Communication Tool for Difficult in-the-Moment Surgical Decisions.

    PubMed

    Kruser, Jacqueline M; Nabozny, Michael J; Steffens, Nicole M; Brasel, Karen J; Campbell, Toby C; Gaines, Martha E; Schwarze, Margaret L

    2015-09-01

    To evaluate a communication tool called "Best Case/Worst Case" (BC/WC) based on an established conceptual model of shared decision-making. Focus group study. Older adults (four focus groups) and surgeons (two focus groups) using modified questions from the Decision Aid Acceptability Scale and the Decisional Conflict Scale to evaluate and revise the communication tool. Individuals aged 60 and older recruited from senior centers (n = 37) and surgeons from academic and private practices in Wisconsin (n = 17). Qualitative content analysis was used to explore themes and concepts that focus group respondents identified. Seniors and surgeons praised the tool for the unambiguous illustration of multiple treatment options and the clarity gained from presentation of an array of treatment outcomes. Participants noted that the tool provides an opportunity for in-the-moment, preference-based deliberation about options and a platform for further discussion with other clinicians and loved ones. Older adults worried that the format of the tool was not universally accessible for people with different educational backgrounds, and surgeons had concerns that the tool was vulnerable to physicians' subjective biases. The BC/WC tool is a novel decision support intervention that may help facilitate difficult decision-making for older adults and their physicians when considering invasive, acute medical treatments such as surgery. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  20. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.

  1. Primary Spinal Cord Melanoma: A Case Report and a Systemic Review of Overall Survival.

    PubMed

    Zhang, Mingzhe; Liu, Raynald; Xiang, Yi; Mao, Jianhui; Li, Guangjie; Ma, Ronghua; Sun, Zhaosheng

    2018-06-01

    Primary spinal cord melanoma (PSCM) is rare. Several case series and case reports have been published in the literature. However, the predictive factors of PSCM survival and management options are not discussed in detail. We present a case of PSCM; total resection was achieved and chemotherapy was given postoperatively. A comprehensive search was performed on PubMed's electronic database using the words "primary spinal cord melanoma." Survival rates with various gender, location, treatment, and metastasis condition were collected from the published articles and analyzed. Fifty-nine cases were eligible for the survival analysis; 54% were male and 46% were female. The most common location was the thorax. Patient sex and tumor location did not influence overall survival. The major presenting symptoms were weakness and paresthesia of the extremities. Metastasis or dissemination was noted in 45.16% of 31 patients. In the Kaplan-Meier survival analysis, patients who had metastasis had the worst prognosis. Extent of resection was not related to mortality. Patients who received surgery and surgery with adjuvant therapy had a better median survival than did those who had adjuvant therapy alone. Prognosis was worst in those patients who underwent only adjuvant therapy without surgery (5 months). Surgery is the first treatment of choice in treating PSCM. The goal of tumor resection is to reduce symptoms. Adjuvant therapy after surgery had a beneficial effect on limiting the metastasis. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin in a group of Swedish diabetics.

    PubMed

    Ilbäck, N-G; Alzin, M; Jahrl, S; Enghardt-Barbieri, H; Busk, L

    2003-02-01

    Few sweetener intake studies have been performed on the general population and only one study has been specifically designed to investigate diabetics and children. This report describes a Swedish study on the estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin by children (0-15 years) and adult male and female diabetics (types I and II) of various ages (16-90 years). Altogether, 1120 participants were asked to complete a questionnaire about their sweetener intake. The response rate (71%, range 59-78%) was comparable across age and gender groups. The most consumed 'light' foodstuffs were diet soda, cider, fruit syrup, table powder, table tablets, table drops, ice cream, chewing gum, throat lozenges, sweets, yoghurt and vitamin C. The major sources of sweetener intake were beverages and table powder. About 70% of the participants, equally distributed across all age groups, read the manufacturer's specifications of the food products' content. The estimated intakes showed that neither men nor women exceeded the ADI for acesulfame-K; however, using worst-case calculations, high intakes were found in young children (169% of ADI). In general, the aspartame intake was low. Children had the highest estimated (worst case) intake of cyclamate (317% of ADI). Children's estimated intake of saccharin only slightly exceeded the ADI at the 5% level for fruit syrup. Children had an unexpectedly high intake of tabletop sweeteners, which, in Sweden, is normally based on cyclamate. The study was performed during two winter months, when it can be assumed that the intake of sweeteners was lower than during warm summer months. Thus, the present study probably underestimates the average intake on a yearly basis. However, our worst-case calculations based on maximum permitted levels were performed on each individual sweetener, although exposure is probably relatively evenly distributed among all sweeteners, except for cyclamate-containing table sweeteners.

  3. Dispelling urban myths about default uncertainty factors in chemical risk assessment – sufficient protection against mixture effects?

    PubMed Central

    2013-01-01

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we expose that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can account both for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded approach to mixture risk assessment. PMID:23816180

  4. Dispelling urban myths about default uncertainty factors in chemical risk assessment--sufficient protection against mixture effects?

    PubMed

    Martin, Olwenn V; Martin, Scholze; Kortenkamp, Andreas

    2013-07-01

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we expose that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can account both for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded approach to mixture risk assessment.

  5. Relaxing USOS Solar Array Constraints for Russian Vehicle Undocking

    NASA Technical Reports Server (NTRS)

    Menkin, Evgeny; Schrock, Mariusz; Schrock, Rita; Zaczek, Mariusz; Gomez, Susan; Lee, Roscoe; Bennet, George

    2011-01-01

    With the retirement of Space Shuttle cargo delivery capability and the ten-year life extension of the International Space Station (ISS), more emphasis is being put on preservation of the service life of ISS critical components. Current restrictions on the United States Orbital Segment (USOS) Solar Array (SA) positioning during Russian Vehicle (RV) departure from ISS nadir and zenith ports cause the SA to be positioned in the plume field of Service Module thrusters and lead to degradation of SAs as well as potential damage to Sun tracking Beta Gimbal Assemblies (BGA). These restrictions are imposed because of the single fault tolerant RV Motion Control System (MCS), which does not meet ISS Safety requirements for catastrophic hazards and dictates a 16-degree Solar Array Rotary Joint position, which ensures that ISS and RV relative motion post-separation does not lead to collision. The purpose of this paper is to describe a methodology and the analysis that was performed to determine relative motion trajectories of the ISS and separating RV for nominal and contingency cases. Analysis was performed in three phases that included ISS free drift prior to Visiting Vehicle separation, ISS and Visiting Vehicle relative motion analysis and clearance analysis. First, the ISS free drift analysis determined the worst case attitude and attitude rate excursions prior to RV separation based on a series of different configurations and mass properties. Next, the relative motion analysis calculated the separation trajectories while varying the initial conditions, such as docking mechanism performance, Visiting Vehicle MCS failure, departure port location, ISS attitude and attitude rates at the time of separation, etc. The analysis employed both orbital mechanics and rigid body rotation calculations while accounting for various atmospheric conditions and gravity gradient effects. The resulting relative motion trajectories were then used to determine the worst case separation envelopes during the clearance analysis. Analytical models were developed individually for each stage and the results were used to build initial conditions for the following stages. In addition to the analysis approach, this paper also discusses the analysis results, showing worst case relative motion envelopes, the recommendations for ISS appendage positioning and the suggested approach for future analyses.

  6. Modelling Long Term Disability following Injury: Comparison of Three Approaches for Handling Multiple Injuries

    PubMed Central

    Gabbe, Belinda J.; Harrison, James E.; Lyons, Ronan A.; Jolley, Damien

    2011-01-01

    Background: Injury is a leading cause of the global burden of disease (GBD). Estimates of non-fatal injury burden have been limited by a paucity of empirical outcomes data. This study aimed to (i) establish the 12-month disability associated with each GBD 2010 injury health state, and (ii) compare approaches to modelling the impact of multiple injury health states on disability as measured by the Glasgow Outcome Scale – Extended (GOS-E). Methods: 12-month functional outcomes for 11,337 survivors to hospital discharge were drawn from the Victorian State Trauma Registry and the Victorian Orthopaedic Trauma Outcomes Registry. ICD-10 diagnosis codes were mapped to the GBD 2010 injury health states. Cases with a GOS-E score >6 were defined as “recovered.” A split dataset approach was used. Cases were randomly assigned to development or test datasets. Probability of recovery for each health state was calculated using the development dataset. Three logistic regression models were evaluated: a) additive, multivariable; b) “worst injury;” and c) multiplicative. Models were adjusted for age and comorbidity and investigated for discrimination and calibration. Findings: A single injury health state was recorded for 46% of cases (1–16 health states per case). The additive (C-statistic 0.70, 95% CI: 0.69, 0.71) and “worst injury” (C-statistic 0.70; 95% CI: 0.68, 0.71) models demonstrated higher discrimination than the multiplicative (C-statistic 0.68; 95% CI: 0.67, 0.70) model. The additive and “worst injury” models demonstrated acceptable calibration. Conclusions: The majority of patients survived with persisting disability at 12-months, highlighting the importance of improving estimates of non-fatal injury burden. Additive and “worst” injury models performed similarly. GBD 2010 injury states were moderately predictive of recovery 1-year post-injury. Further evaluation using additional measures of health status and functioning and comparison with the GBD 2010 disability weights will be needed to optimise injury states for future GBD studies. PMID:21984951

  7. Sparking-out optimization while surface grinding aluminum alloy 1933T2 parts using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Soler, Ya I.; Salov, V. M.; Kien Nguyen, Chi

    2018-03-01

    The article presents the results of a search for optimal sparking-out strokes when surface grinding aluminum parts with highly porous Norton wheels of black silicon carbide 37C80K12VP using fuzzy logic. The topography of the ground surface is evaluated according to the following parameters: roughness – Ra, Rmax, Sm; indicators of flatness deviation – EFEmax, EFEa, EFEq; and microhardness HV; each of these parameters is represented by two measures of position and dispersion. The simulation results of fuzzy logic in the Matlab environment establish that during the grinding of alloy 1933T2, the best integral performance evaluation of sparking-out was given to two double-strokes (d=0.827) and the worst to three (d=0.405).

  8. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, while considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.

  9. Cyber-security Considerations for Real-Time Physiological Status Monitoring: Threats, Goals, and Use Cases

    DTIC Science & Technology

    2016-11-01

    low-power RF transmissions used by the OBAN system. B. Threat Analysis Methodology: To analyze the risk presented by a particular threat we use a... power efficiency, and in the absolute worst case a compromise of the wireless channel could result in death. Fitness trackers on the other hand are... analysis is intended to inform the development of secure RT-PSM architectures. I. INTRODUCTION: The development of very low-power computing devices and

  10. Considering the worst-case metabolic scenario, but training to the typical-case competitive scenario: response to Amtmann (2012).

    PubMed

    Del Vecchio, Fabrício Boscolo; Franchini, Emerson

    2013-08-01

    This response to Amtmann's letter emphasizes that knowledge of the typical time structure, as well as its variation, together with the main goal of mixed martial arts athletes--to win by knockout or submission--needs to be properly considered during training sessions. Examples from other combat sports are given and discussed, especially concerning the importance of adapting the physical conditioning workouts to the technical-tactical profile of the athlete and not the opposite.

  11. The reduction of a ""safety catastrophic'' potential hazard: A case history

    NASA Technical Reports Server (NTRS)

    Jones, J. P.

    1971-01-01

    A worst-case analysis is reported on the safety of time watch movements for triggering explosive packages on the lunar surface in an experiment to investigate physical lunar structural characteristics through induced seismic energy waves. Considered are the combined effects of low pressure, low temperature, lunar gravity, gear train error, and position. Control measures comprise a sealed control cavity and design requirements to prevent overbanking in the mainspring torque curve. Thus, the potential hazard is reduced to the 'safety negligible' category.

  12. 33 CFR 154.1035 - Specific requirements for facilities that could reasonably be expected to cause significant and...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... spill mitigation procedures. (i) This subsection must describe the volume(s) and oil groups that would... applicable, the worst case discharge from the non-transportation-related facility. This must be the same volume provided in the response plan for the non-transportation-related facility. (ii) This subsection...

  13. Effects of forcing uncertainties in the improvement skills of assimilating satellite soil moisture retrievals into flood forecasting models

    USDA-ARS?s Scientific Manuscript database

    Floods have negative impacts on society, causing damages in infrastructures and industry, and in the worst cases, causing loss of human lives. Thus early and accurate warning is crucial to significantly reduce the impacts on public safety and economy. Reliable flood warning can be generated using ...

  14. Evaluation of Bias Correction Methods for "Worst-case" Selective Non-participation in NAEP

    ERIC Educational Resources Information Center

    McLaughlin, Don; Gallagher, Larry; Stancavage, Fran

    2004-01-01

    With the advent of No Child Left Behind (NCLB), the context for NAEP participation is changing. Whereas in the past participation in NAEP has always been voluntary, participation is now mandatory for some grade and subjects among schools receiving Title I funds. While this will certainly raise school-level participation rates in the mandated…

  15. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...

  16. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...

  17. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...

  18. 40 CFR 57.405 - Formulation, approval, and implementation of requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... study shall be submitted after the end of the worst case three-month period as a part of the next semi... study demonstrating that the SCS will prevent violations of the NAAQS in the smelter's DLA at all times. The reliability study shall include a comprehensive analysis of the system's operation during one or...

  19. Algorithm Diversity for Resilent Systems

    DTIC Science & Technology

    2016-06-27

    data structures. SUBJECT TERMS: computer security, software diversity, program transformation...systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity...worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different

  20. CTE Delivers Leaders

    ERIC Educational Resources Information Center

    Magnuson, Peter

    2013-01-01

    Imagine a football team without a quarterback. Imagine a ship without a captain. Imagine a kitchen without a chef. Right now, you are probably running a number of worst-case scenarios through your head: a losing season; a ship adrift; some not-so-tasty cookies. The reason your mind has conjured up these end results is because in each of these…

  1. The Mayors' Charter Schools

    ERIC Educational Resources Information Center

    Magee, Michael

    2014-01-01

    In 2007, the case could be made that Rhode Island had, dollar for dollar, the worst-performing public education system in the United States. Despite per-pupil expenditures ranking in the top 10 nationally, the state's 8th graders fared no better than 40th in reading and 33rd in math on the National Assessment of Educational Progress (NAEP). Only…

  2. Improved Multiple-Species Cyclotron Ion Source

    NASA Technical Reports Server (NTRS)

    Soli, George A.; Nichols, Donald K.

    1990-01-01

    Use of pure isotope 86Kr instead of natural krypton in a multiple-species ion source enables the source to produce krypton ions separated from argon ions by tuning the cyclotron with which the source is used. The added capability to produce and separate krypton ions at kinetic energies of 150 to 400 MeV is necessary for simulation of the worst-case ions occurring in outer space.

  3. States' Fiscal Woes Raise Anxiety Level on School Budgets

    ERIC Educational Resources Information Center

    Zehr, Mary Ann

    2007-01-01

    A long-projected revenue chill is beginning to bite in a number of states, putting pressure on education policymakers to defend existing programs--and, in some cases, forcing them to prepare for the worst if budget cuts become a reality. The causes vary, from slack property-tax receipts in Florida to a chronically sluggish economy in Michigan. But…

  4. Estimation of human damage and economic loss of buildings for the worst-credible scenario of tsunami inundation in the city of Augusta, Italy

    NASA Astrophysics Data System (ADS)

    Pagnoni, Gianluca; Tinti, Stefano

    2017-04-01

    The city of Augusta is located in the southern part of the eastern coast of Sicily. The Italian tsunami catalogue and paleo-tsunami surveys indicate that at least 7 tsunami events affected the bay of Augusta in the last 4,000 years, two of which are associated with earthquakes (1169 and 1693) that destroyed the city. For these reasons Augusta has been chosen in the project ASTARTE as a test site for the study of issues related to tsunami hazard and risk. In the last two years we studied hazard through the approach of the worst-case credible scenario and carried out vulnerability and damage analysis for buildings. In this work, we integrate that research and estimate the damage to people and the economic loss of buildings due to structural damage. As regards inundation, we assume both uniform inundation levels (bath-tub hypothesis) and inundation data resulting from the worst-case scenario elaborated for the area by Armigliato et al. (2015). Human damage is calculated in three steps using the method introduced by Pagnoni et al. (2016), following the work of Terrier et al. (2012) and of Koshimura et al. (2009). First, we use census data to estimate the number of people present in each residential building affected by inundation; second, based on water column depth and building type, we evaluate the level of damage to people; third, we provide an estimate of fatalities. The economic loss is computed for two types of buildings (residential and trade-industrial) by using data on inundation and data from the real estate market. This study was funded by the EU Project ASTARTE - "Assessment, STrategy And Risk Reduction for Tsunamis in Europe", Grant 603839, 7th FP (ENV.2013.6.4-3)

  5. Best-worst scaling to assess the most important barriers and facilitators for the use of health technology assessment in Austria.

    PubMed

    Feig, Chiara; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia M A A; Simon, Judit; Mayer, Susanne

    2018-04-01

    Although Health Technology Assessment (HTA) is increasingly used to support evidence-based decision-making in health care, several barriers and facilitators for the use of HTA have been identified. This best-worst scaling (BWS) study aims to assess the relative importance of selected barriers and facilitators of the uptake of HTA studies in Austria. A BWS object case survey was conducted among 37 experts in Austria to assess the relative importance of HTA barriers and facilitators. Hierarchical Bayes estimation was applied, with the best-worst count analysis as sensitivity analysis. Subgroup analyses were also performed on professional role and HTA experience. The most important barriers were 'lack of transparency in the decision-making process', 'fragmentation', 'absence of appropriate incentives', 'no explicit framework for decision-making process', and 'insufficient legal support'. The most important facilitators were 'transparency in the decision-making process', 'availability of relevant HTA research for policy makers', 'availability of explicit framework for decision-making process', 'sufficient legal support', and 'appropriate incentives'. This study suggests that HTA barriers and facilitators related to the context of decision makers, especially 'policy characteristics' and 'organization and resources' are the most important in Austria. A transparent and participatory decision-making process could improve the adoption of HTA evidence.

  6. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    PubMed

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations, in which each particle evolves by standard PSO and each subpopulation is then updated by using different local search schemes such as variable neighborhood search (VNS) and the individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, the PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.

  7. Use of Medicare claims to rank hospitals by surgical site infection risk following coronary artery bypass graft surgery.

    PubMed

    Huang, Susan S; Placzek, Hilary; Livingston, James; Ma, Allen; Onufrak, Fallon; Lankiewicz, Julie; Kleinman, Ken; Bratzler, Dale; Olsen, Margaret A; Lyles, Rosie; Khan, Yosef; Wright, Paula; Yokoe, Deborah S; Fraser, Victoria J; Weinstein, Robert A; Stevenson, Kurt; Hooper, David; Vostok, Johanna; Datta, Rupak; Nsa, Wato; Platt, Richard

    2011-08-01

    To evaluate whether longitudinal insurer claims data allow reliable identification of elevated hospital surgical site infection (SSI) rates. We conducted a retrospective cohort study of Medicare beneficiaries who underwent coronary artery bypass grafting (CABG) in US hospitals performing at least 80 procedures in 2005. Hospitals were assigned to deciles by using case mix-adjusted probabilities of having an SSI-related inpatient or outpatient claim code within 60 days of surgery. We then reviewed medical records of randomly selected patients to assess whether chart-confirmed SSI risk was higher in hospitals in the worst deciles compared with the best deciles. Fee-for-service Medicare beneficiaries who underwent CABG in these hospitals in 2005. We evaluated 114,673 patients who underwent CABG in 671 hospitals. In the best decile, 7.8% (958/12,307) of patients had an SSI-related code, compared with 24.8% (2,747/11,068) in the worst decile ([Formula: see text]). Medical record review confirmed SSI in 40% (388/980) of those with SSI-related codes. In the best decile, the chart-confirmed annual SSI rate was 3.2%, compared with 9.4% in the worst decile, with an adjusted odds ratio of SSI of 2.7 (confidence interval, 2.2-3.3; [Formula: see text]) for CABG performed in a worst-decile hospital compared with a best-decile hospital. Claims data can identify groups of hospitals with unusually high or low post-CABG SSI rates. Assessment of claims is more reproducible and efficient than current surveillance methods. This example of secondary use of routinely recorded electronic health information to assess quality of care can identify hospitals that may benefit from prevention programs.

  8. Battery management systems (BMS) optimization for electric vehicles (EVs) in Malaysia

    NASA Astrophysics Data System (ADS)

    Salehen, P. M. W.; Su'ait, M. S.; Razali, H.; Sopian, K.

    2017-04-01

    Following the UN Climate Change Conference 2009 in Copenhagen, Denmark, Malaysia made a serious commitment to its "Go Green" campaign with the aim of reducing GHG emissions by 40% by the year 2020. Therefore, the National Green Technology Policy was enacted in 2009 with transportation as one of its focus sectors, which include hybrid (HEVs), electric (EVs) and fuel cell vehicles, in order to cope with the worst-case scenario. While the number of registered cars has been increasing by 1 million yearly, the total has doubled in the last two decades. Consequently, CO2 emission in Malaysia reaches up to 97.1% and will continue to increase, mainly due to activities in the transportation sector. Nevertheless, Malaysia is now moving towards green cars, in particular battery-based EVs. This type of transportation needs power performance optimization, which is controlled by the Battery Management System (BMS). The BMS is an essential module for reliable power management, optimal power performance and vehicle safety in EVs. Thus, this paper proposes power performance optimization for various setups of lithium-ion cathodes with graphene anodes using MATLAB/SIMULINK software, for better management performance and extended EV driving range.

  9. Case series: Two cases of eyeball tattoos with short-term complications.

    PubMed

    Duarte, Gonzalo; Cheja, Rashel; Pachón, Diana; Ramírez, Carolina; Arellanes, Lourdes

    2017-04-01

    To report two cases of eyeball tattoos with short-term post-procedural complications. Case 1 is a 26-year-old Mexican man who developed orbital cellulitis and posterior scleritis 2 h after an eyeball tattoo. The patient responded satisfactorily to systemic antibiotic and corticosteroid treatment. Case 2 is a 17-year-old Mexican man who developed two sub-episcleral nodules at the ink injection sites immediately after the procedure. Eyeball tattoos are performed by personnel without ophthalmic training. There are a substantial number of short-term risks associated with this procedure. Long-term effects on the eyes and vision are still unknown, but in a worst-case scenario could include loss of vision or permanent damage to the eyes.

  10. Tailor-made heart simulation predicts the effect of cardiac resynchronization therapy in a canine model of heart failure.

    PubMed

    Panthee, Nirmal; Okada, Jun-ichi; Washio, Takumi; Mochizuki, Youhei; Suzuki, Ryohei; Koyama, Hidekazu; Ono, Minoru; Hisada, Toshiaki; Sugiura, Seiryo

    2016-07-01

    Despite extensive studies on clinical indices for the selection of patient candidates for cardiac resynchronization therapy (CRT), approximately 30% of selected patients do not respond to this therapy. Herein, we examined whether CRT simulations based on individualized realistic three-dimensional heart models can predict the therapeutic effect of CRT in a canine model of heart failure with left bundle branch block. In four canine models of failing heart with dyssynchrony, individualized three-dimensional heart models reproducing the electromechanical activity of each animal were created based on computed tomographic images. CRT simulations were performed for 25 patterns of three ventricular pacing lead positions. Lead positions producing the best and the worst therapeutic effects were selected in each model. The validity of predictions was tested in acute experiments in which hearts were paced from the sites identified by simulations. We found significant correlations between the experimentally observed improvement in ejection fraction (EF) and the predicted improvements in ejection fraction (P<0.01) or the maximum value of the derivative of left ventricular pressure (P<0.01). The optimal lead positions produced better outcomes compared with the worst positioning in all dogs studied, although there were significant variations in responses. Variations in ventricular wall thickness among the dogs may have contributed to these responses. Thus, CRT simulations using the individualized three-dimensional heart models can predict acute hemodynamic improvement, and help determine the optimal positions of the pacing lead. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Modelling of occupational respirable crystalline silica exposure for quantitative exposure assessment in community-based case-control studies.

    PubMed

    Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans

    2011-11-01

    We describe an empirical model for exposure to respirable crystalline silica (RCS) to create a quantitative job-exposure matrix (JEM) for community-based studies. Personal measurements of exposure to RCS from Europe and Canada were obtained for exposure modelling. A mixed-effects model was elaborated, with region/country and job titles as random effect terms. The fixed effect terms included year of measurement, measurement strategy (representative or worst-case), sampling duration (minutes) and a priori exposure intensity rating for each job from an independently developed JEM (none, low, high). 23,640 personal RCS exposure measurements, covering a time period from 1976 to 2009, were available for modelling. The model indicated an overall downward time trend in RCS exposure levels of -6% per year. Exposure levels were higher in the UK and Canada, and lower in Northern Europe and Germany. Worst-case sampling was associated with higher reported exposure levels and an increase in sampling duration was associated with lower reported exposure levels. Highest predicted RCS exposure levels in the reference year (1998) were for chimney bricklayers (geometric mean 0.11 mg m⁻³), monument carvers and other stone cutters and carvers (0.10 mg m⁻³). The resulting model enables us to predict time-, job-, and region/country-specific exposure levels of RCS. These predictions will be used in the SYNERGY study, an ongoing pooled multinational community-based case-control study on lung cancer.

  12. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The process of distributing goods needs a strategy that minimizes the total cost spent on operational activities. However, several constraints have to be satisfied, namely the vehicle capacities and the customers' service time windows. This yields the Vehicle Routing Problem with Time Windows (VRPTW), a problem with complex constraints. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. The computational results show that these algorithms perform well in minimizing the total distance, and that a larger population gives better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing in dealing with big data.

  13. A randomised controlled trial of three or one breathing technique training sessions for breathlessness in people with malignant lung disease.

    PubMed

    Johnson, Miriam J; Kanaan, Mona; Richardson, Gerry; Nabb, Samantha; Torgerson, David; English, Anne; Barton, Rachael; Booth, Sara

    2015-09-07

    About 90% of patients with intra-thoracic malignancy experience breathlessness. Breathing training is helpful, but it is unknown whether repeated sessions are needed. The present study aims to test whether three sessions are better than one for breathlessness in this population. This is a multi-centre randomised controlled non-blinded parallel arm trial. Participants were allocated to three sessions or single (1:2 ratio) using central computer-generated block randomisation by an independent Trials Unit and stratified for centre. The setting was respiratory, oncology or palliative care clinics at eight UK centres. Inclusion criteria were people with intrathoracic cancer and refractory breathlessness, expected prognosis ≥3 months, and no prior experience of breathing training. The trial intervention was a complex breathlessness intervention (breathing training, anxiety management, relaxation, pacing, and prioritisation) delivered over three hour-long sessions at weekly intervals, or during a single hour-long session. The main primary outcome was worst breathlessness over the previous 24 hours ('worst'), by numerical rating scale (0 = none; 10 = worst imaginable). Our primary analysis was area under the curve (AUC) 'worst' from baseline to 4 weeks. All analyses were by intention to treat. Between April 2011 and October 2013, 156 consenting participants were randomised (52 three; 104 single). Overall, the 'worst' score reduced from 6.81 (SD, 1.89) to 5.84 (2.39). Primary analysis [n = 124 (79%)] showed no between-arm difference in the AUC (three sessions 22.86 (7.12) vs single session 22.58 (7.10); P value = 0.83); mean difference 0.2, 95% CIs (-2.31 to 2.97). Complete case analysis showed a non-significant reduction in QALYs with three sessions (mean difference -0.006, 95% CIs -0.018 to 0.006). Sensitivity analyses found similar results. The probability of the single session being cost-effective (threshold value of £20,000 per QALY) was over 80%. There was no evidence that three sessions conferred additional benefits, including cost-effectiveness, over one. A single session of breathing training seems appropriate and minimises patient burden. Registry: ISRCTN; ISRCTN49387307; http://www.isrctn.com/ISRCTN49387307; registration date: 25/01/2011.

  14. Tracing the transition path between optimal strategies combinations within a competitive market of innovative industrial products

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Pollalis, Yannis A.

    2012-12-01

    In several cases, a competitive market can be simulated by a game in which each company/opponent is a player. Because each player (alone or in alliances) works against the interests of the others, the rather conservative maximin criterion is frequently used to select the strategy, or combination of strategies, that yields the best of the worst possible outcomes for each player. Under this criterion, an optimal solution is obtained when no player finds it beneficial to alter his strategy; an equilibrium has then been reached, which also gives the value of the game. If conditions change for a player, e.g., because he achieves an unexpectedly successful result in developing an innovative industrial product or obtains higher liquidity that allows increased advertising to capture a larger market share, a new equilibrium is reached. Identifying the path between the old and the new equilibrium points can be valuable for investigating the robustness of the solution by means of sensitivity analysis, since uncertainty plays a critical role here: evaluation of the payoff matrix is usually based on experts' estimates. In this work, a standard methodology (comprising 16 activity stages and 7 decision nodes) for tracing this path is developed, and a numerical implementation demonstrates its functionality.
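
    A hedged illustration of the maximin criterion mentioned above for a two-player zero-sum game, solved as a small linear program; the 3x3 payoff matrix is invented for the example and is unrelated to the paper's 16-stage methodology or its case data.

        import numpy as np
        from scipy.optimize import linprog

        def maximin_strategy(payoff):
            """Return the row player's optimal mixed strategy and the game value.

            payoff[i, j] is the row player's gain when row strategy i meets column strategy j.
            """
            payoff = np.asarray(payoff, dtype=float)
            shift = -min(0.0, payoff.min()) + 1.0      # make all entries strictly positive
            a = payoff + shift
            m, n = a.shape
            # LP form: minimize sum(y) subject to a^T y >= 1, y >= 0;
            # then value = 1 / sum(y) and the mixed strategy is y normalized.
            res = linprog(c=np.ones(m), A_ub=-a.T, b_ub=-np.ones(n),
                          bounds=[(0, None)] * m, method="highs")
            y = res.x
            value = 1.0 / y.sum() - shift
            strategy = y / y.sum()
            return strategy, value

        if __name__ == "__main__":
            # Rows: strategies of company A; columns: strategies of its competitor (invented numbers).
            payoff = [[3, -1, 2],
                      [1, 4, -2],
                      [0, 2, 1]]
            p, v = maximin_strategy(payoff)
            print("optimal mixed strategy:", np.round(p, 3))
            print("value of the game:", round(v, 3))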

  15. Optimization of a secondary VOI protocol for lung imaging in a clinical CT scanner.

    PubMed

    Larsen, Thomas C; Gopalakrishnan, Vissagan; Yao, Jianhua; Nguyen, Catherine P; Chen, Marcus Y; Moss, Joel; Wen, Han

    2018-05-21

    We present a solution to an unmet clinical need: an in-situ "close look" at a pulmonary nodule, or at the margins of a pulmonary cyst, revealed by a primary (screening) chest CT while the patient is still in the scanner. We first evaluated options available on current whole-body CT scanners for high-resolution screening scans, including ROI reconstruction of the primary scan data and HRCT, but found that they offer insufficient SNR in lung tissue or discontinuous slice coverage. Within the capabilities of current clinical CT systems, we opted for a secondary volume-of-interest (VOI) protocol in which the radiation dose is focused into a short-beam axial scan at the z position of interest, combined with a small-FOV reconstruction at the xy position of interest. The objective of this work was to design a VOI protocol optimized for targeted lung imaging in a clinical whole-body CT system. Using a chest phantom containing a lung-mimicking foam insert with a simulated cyst, we identified the appropriate scan mode and optimized both the scan and reconstruction parameters. Compared with the standard chest CT, the VOI protocol yielded 3.2 times the texture amplitude-to-noise ratio in the lung-mimicking foam and 8.4 times the texture difference between the lung-mimicking and reference foams. It also improved the detail of the simulated cyst wall and the resolution in a line-pair insert. Relative to the standard chest CT, the effective dose of the secondary VOI protocol was 42% on average and up to 100% in the worst-case scenario of VOI positioning. The optimized protocol will be used to obtain detailed CT textures of pulmonary lesions, which are biomarkers for the type and stage of lung diseases. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
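
    A rough sketch of a texture amplitude-to-noise ratio computed from two image regions, loosely in the spirit of the metric named above; the authors' exact definition is not stated here, so this high-pass-residual version and the synthetic test data are assumptions for illustration only.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def texture_amplitude_to_noise(roi, reference_roi, smooth_sigma=3.0):
            """Ratio of residual texture amplitude in `roi` to noise in a uniform `reference_roi`."""
            roi = np.asarray(roi, dtype=float)
            reference_roi = np.asarray(reference_roi, dtype=float)
            # Remove low-frequency structure so only fine texture / noise remains.
            texture = roi - gaussian_filter(roi, smooth_sigma)
            noise = reference_roi - gaussian_filter(reference_roi, smooth_sigma)
            return texture.std() / max(noise.std(), 1e-9)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            lung_foam = rng.normal(0.0, 30.0, (64, 64))     # textured insert (synthetic HU values)
            uniform_ref = rng.normal(0.0, 10.0, (64, 64))   # uniform reference foam (synthetic)
            print("texture ANR:", round(texture_amplitude_to_noise(lung_foam, uniform_ref), 2))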

  16. Contrasting SWAT predictions of watershed-level streamflow and nutrient loss resulting from static versus dynamic atmospheric CO2 inputs

    USDA-ARS?s Scientific Manuscript database

    Past climate observations have indicated a rapid increase in global atmospheric CO2 concentration during late 20th century (13 ppm/decade), and models project further rise throughout the 21st century (24 ppm/decade and 69 ppm/decade in the best and worst case scenario, respectively). We modified SWA...

  17. Sports Competition--Integrated or Segregated? Which Is Better for Your Child?

    ERIC Educational Resources Information Center

    Grosse, Susan J.

    2008-01-01

    Selecting competitive sports opportunities for a child is a challenging process. Parents have to make the right choices so that their young athletes will have many years of healthy, happy, active experiences. If parents make the wrong choices, their son or daughter will have, at the very least, a few unhappy hours, and worst-case scenario, could…

  18. Responding to Disaster with a Service Learning Project for Honors Students

    ERIC Educational Resources Information Center

    Yoder, Stephen A.

    2013-01-01

    On Thursday, April 27, 2011, one of the worst natural disasters in the history of Alabama struck in the form of ferocious tornadoes touching down in various parts of the state. The dollar amount of the property damage was in the billions. Lives were lost and thousands of survivors' lives were seriously and in many cases forever disrupted. A few…

  19. Modelling the Growth of Swine Flu

    ERIC Educational Resources Information Center

    Thomson, Ian

    2010-01-01

    The spread of swine flu has been a cause of great concern globally. With no vaccine developed as yet, (at time of writing in July 2009) and given the fact that modern-day humans can travel speedily across the world, there are fears that this disease may spread out of control. The worst-case scenario would be one of unfettered exponential growth.…
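
    A brief, hedged illustration of the "unfettered exponential growth" worst case noted above; the initial case count and growth rate are invented classroom numbers, not data from the article or from actual swine-flu surveillance.

        import math

        def exponential_cases(initial_cases, daily_growth_rate, days):
            """Cases after `days` of unchecked exponential growth."""
            return initial_cases * math.exp(daily_growth_rate * days)

        if __name__ == "__main__":
            n0, r = 100, 0.15                  # 100 initial cases, 15% continuous daily growth (assumed)
            doubling_time = math.log(2) / r
            print(f"doubling time: {doubling_time:.1f} days")
            for d in (7, 14, 28):
                print(f"day {d}: ~{exponential_cases(n0, r, d):,.0f} cases")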

  20. Between the Under-Labourer and the Master-Builder: Observations on Bunge's Method

    ERIC Educational Resources Information Center

    Agassi, Joseph

    2012-01-01

    Mario Bunge has repeatedly discussed contributions to philosophy and to science that are worthless at best and dangerous at worst, especially cases of pseudo-science. He clearly gives his reason in his latest essay on this matter: "The fact that science can be faked to the point of deceiving science lovers suggests the need for a rigorous sifting…
