Less than severe worst case accidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, G.A.
1996-08-01
Many systems can provide tremendous benefit if operating correctly, produce only an inconvenience if they fail to operate, but have extreme consequences if they are only partially disabled such that they operate erratically or prematurely. In order to assure safety, systems are often tested against the most severe environments and accidents that are considered possible to ensure either safe operation or safe failure. However, it is often the less severe environments which result in the "worst case accident," since these are the conditions in which part of the system may be exposed or rendered unpredictable prior to total system failure. Some examples of less severe mechanical, thermal, and electrical environments which may actually be worst case are described as cautions for others in industries with high-consequence operations or products.
30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.
Code of Federal Regulations, 2011 CFR
2011-07-01
... associated with the facility. In determining the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your... For exploratory or development drilling operations, the size of your worst case discharge scenario is...
30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your scenario must discuss how to respond to... drilling operations, the size of your worst case discharge scenario is the daily volume possible from an...
Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
Robust Flutter Margin Analysis that Incorporates Flight Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Martin J.
1998-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
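The worst-case margin idea in the two flutter abstracts above can be illustrated with a toy sketch. This is not the actual structured-singular-value (mu) computation, which requires dedicated robust-control software; it is a brute-force search over a bounded parameter box for the combination that minimizes a stability margin. The two-state model, the parameter ranges, and the dynamic-pressure dependence below are all invented for illustration.

```python
import itertools
import numpy as np

def damping_margin(stiffness, damping, q_dyn):
    """Stability margin of a toy 2-state aeroelastic model: distance of the
    rightmost pole from the imaginary axis. The aerodynamic term reduces
    effective damping as dynamic pressure q_dyn grows (illustrative only)."""
    a = np.array([[0.0, 1.0],
                  [-stiffness, -(damping - 0.02 * q_dyn)]])
    eig = np.linalg.eigvals(a)
    return -eig.real.max()

# Uncertainty box around nominal parameters (illustrative values)
stiffness_range = np.linspace(0.9, 1.1, 21)   # +/-10% stiffness error
damping_range = np.linspace(0.08, 0.12, 21)   # +/-20% damping error

def worst_case_margin(q_dyn):
    """Minimum margin over the whole uncertainty box at one flight condition."""
    return min(damping_margin(k, c, q_dyn)
               for k, c in itertools.product(stiffness_range, damping_range))

# Worst-case instability onset: first dynamic pressure where the margin
# goes negative for SOME parameter combination in the box
for q in np.linspace(0.0, 10.0, 101):
    if worst_case_margin(q) < 0.0:
        print(f"worst-case instability onset near q = {q:.1f}")
        break
```

The nominal model goes unstable later than the worst-case model does, which is the point of the abstracts: a margin computed against the uncertainty set is smaller (more conservative) than the nominal p-k estimate.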
33 CFR 154.1120 - Operating restrictions and interim operating authorization.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Facility Operating in Prince William Sound, Alaska § 154.1120 Operating restrictions and interim operating authorization. (a) The owner or operator of a TAPAA facility may not operate in Prince William Sound, Alaska... practicable, a worst case discharge or a discharge of 200,000 barrels of oil, whichever is greater, in Prince...
Taylor, Lauren J; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Brasel, Karen J; Johnson, Sara K; Zelenski, Amy; Rathouz, Paul J; Zhao, Qianqian; Kwekkeboom, Kristine L; Campbell, Toby C; Schwarze, Margaret L
2017-06-01
Although many older adults prefer to avoid burdensome interventions with limited ability to preserve their functional status, aggressive treatments, including surgery, are common near the end of life. Shared decision making is critical to achieve value-concordant treatment decisions and minimize unwanted care. However, communication in the acute inpatient setting is challenging. To evaluate the proof of concept of an intervention to teach surgeons to use the Best Case/Worst Case framework as a strategy to change surgeon communication and promote shared decision making during high-stakes surgical decisions. Our prospective pre-post study was conducted from June 2014 to August 2015, and data were analyzed using a mixed methods approach. The data were drawn from decision-making conversations between 32 older inpatients with an acute nonemergent surgical problem, 30 family members, and 25 surgeons at 1 tertiary care hospital in Madison, Wisconsin. A 2-hour training session to teach each study-enrolled surgeon to use the Best Case/Worst Case communication framework. We scored conversation transcripts using OPTION 5, an observer measure of shared decision making, and used qualitative content analysis to characterize patterns in conversation structure, description of outcomes, and deliberation over treatment alternatives. The study participants were patients aged 68 to 95 years (n = 32), 44% of whom had 5 or more comorbid conditions; family members of patients (n = 30); and surgeons (n = 17). The median OPTION 5 score improved from 41 preintervention (interquartile range, 26-66) to 74 after Best Case/Worst Case training (interquartile range, 60-81). Before training, surgeons described the patient's problem in conjunction with an operative solution, directed deliberation over options, listed discrete procedural risks, and did not integrate preferences into a treatment recommendation. 
After training, surgeons using Best Case/Worst Case clearly presented a choice between treatments, described a range of postoperative trajectories including functional decline, and involved patients and families in deliberation. Using the Best Case/Worst Case framework changed surgeon communication by shifting the focus of decision-making conversations from an isolated surgical problem to a discussion about treatment alternatives and outcomes. This intervention can help surgeons structure challenging conversations to promote shared decision making in the acute setting.
Analysis of Separation Corridors for Visiting Vehicles from the International Space Station
NASA Technical Reports Server (NTRS)
Zaczek, Mariusz P.; Schrock, Rita R.; Schrock, Mark B.; Lowman, Bryan C.
2011-01-01
The International Space Station (ISS) is a very dynamic vehicle with many operational constraints that affect its performance, operations, and vehicle lifetime. Most constraints are designed to alleviate various safety concerns that are a result of dynamic activities between the ISS and various Visiting Vehicles (VVs). One such constraint that has been in place for Russian Vehicle (RV) operations is the limitation placed on Solar Array (SA) positioning in order to prevent collisions during separation and subsequent relative motion of VVs. An unintended consequence of the SA constraint has been the impact on the operational flexibility of the ISS resulting from the reduced power generation capability as well as from a reduction in the operational lifetime of various SA components. The purpose of this paper is to discuss the technique and the analysis that were applied in order to relax the SA constraints for RV undockings, thereby both improving the ISS operational flexibility and extending its lifetime for many years to come. This analysis focused on the effects of the dynamic motion that occurs both prior to and following RV separations. The analysis involved a parametric approach in the conservative application of various initial conditions and assumptions. These included the use of the worst case minimum and maximum vehicle configurations, worst case initial attitudes and attitude rates, and the worst case docking port separation dynamics. Separations were calculated for multiple ISS docking ports, at varied deviations from the nominal undocking attitudes, and included the use of two separate attitude control schemes: continuous free-drift and a post-separation attitude hold. The analysis required numerical propagation of both the separation motion and the vehicle attitudes using 3-degree-of-freedom (DOF) relative motion equations coupled with rigid body rotational dynamics to generate a large set of separation trajectories.
Worst case analysis: Earth sensor assembly for the tropical rainfall measuring mission observatory
NASA Technical Reports Server (NTRS)
Conley, Michael P.
1993-01-01
This worst case analysis verifies that the TRMMESA electronic design is capable of maintaining performance requirements when subjected to worst case circuit conditions. The TRMMESA design is a proven heritage design capable of withstanding the most adverse circuit conditions. Changes made to the baseline DMSP design are relatively minor and do not adversely affect the worst case analysis of the TRMMESA electrical design.
Method of Generating Transient Equivalent Sink and Test Target Temperatures for Swift BAT
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2004-01-01
The NASA Swift mission has a 600-km altitude and a 22 degrees maximum inclination. The sun angle varies from 45 degrees to 180 degrees in normal operation. As a result, environmental heat fluxes absorbed by the Burst Alert Telescope (BAT) radiator and loop heat pipe (LHP) compensation chambers (CCs) vary transiently. Therefore the equivalent sink temperatures for the radiator and CCs also vary transiently. In thermal performance verification testing in vacuum, the radiator and CCs radiated heat to sink targets. This paper presents an analytical technique for generating orbit transient equivalent sink temperatures and a technique for generating transient sink target temperatures for the radiator and LHP CCs. Using these techniques, transient target temperatures for the radiator and LHP CCs were generated for three thermal environmental cases: worst hot case, worst cold case, and cooldown and warmup between the worst hot case in sunlight and the worst cold case in eclipse, and for three heat transport values: 128 W, 255 W, and 382 W. The 128 W case assumed that the two LHPs share the 255 W of waste heat from the detector array equally. The 255 W case assumed that one LHP fails, so that the remaining LHP transports all the waste heat from the detector array to the radiator. The 382 W case assumed the same single-LHP failure with a 50% design margin added. All these transient target temperatures were successfully implemented in the engineering test unit (ETU) LHP and flight LHP thermal performance verification tests in vacuum.
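The equivalent sink temperature concept above can be sketched numerically: it is the temperature of a fictitious black sink that would impose the same net radiative load on the surface as the actual transient environment. The energy balance below is the standard radiator form; the orbit period, fluxes, eclipse model, and optical properties are illustrative placeholders, not Swift BAT values.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equivalent_sink_temperature(q_solar, q_albedo, q_earth_ir,
                                absorptance=0.2, emittance=0.8):
    """Sink temperature (K) giving the same absorbed environmental load:
    emittance * SIGMA * T_sink^4 = alpha*(solar + albedo) + emittance*IR."""
    q_abs = absorptance * (q_solar + q_albedo) + emittance * q_earth_ir
    return (q_abs / (emittance * SIGMA)) ** 0.25

# Transient fluxes over one ~96-minute orbit (crude illustrative profiles)
t = np.linspace(0.0, 5760.0, 97)                               # seconds
solar = np.where(np.cos(2*np.pi*t/5760.0) > -0.3, 1367.0, 0.0)  # eclipse cutoff
albedo = 0.3 * solar * 0.4                                      # rough albedo load
earth_ir = np.full_like(t, 240.0)                               # Earth IR, W/m^2

t_sink = equivalent_sink_temperature(solar, albedo, earth_ir)
print(f"sink temperature range: {t_sink.min():.1f} K to {t_sink.max():.1f} K")
```

The minimum (eclipse) and maximum (full sun) values of this profile are what a worst cold case and worst hot case would hold constant; the transient profile itself is what the paper's cooldown/warmup case tracks.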
A Worst-Case Approach for On-Line Flutter Prediction
NASA Technical Reports Server (NTRS)
Lind, Rick C.; Brenner, Martin J.
1998-01-01
Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates that this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.
SU-E-T-551: PTV Is the Worst-Case of CTV in Photon Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, D; Liu, W; Park, P
2014-06-01
Purpose: To examine the supposition of the static dose cloud and adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of clinical target volume (CTV) dose distribution for photon therapy in head and neck (H and N) plans. Methods: Five diverse H and N plans clinically delivered at our institution were selected. Isocenter for each plan was shifted positively and negatively in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (3 mm) for a total of six shifted plans per original plan. The perturbed plan dose was recalculated in Eclipse (AAA v11.0.30) using the same, fixed fluence map as the original plan. The dose distributions for all plans were exported from the treatment planning system to determine the worst-case CTV dose distributions for each nominal plan. Two worst-case distributions, cold and hot, were defined by selecting the minimum or maximum dose per voxel from all the perturbed plans. The resulting dose volume histograms (DVH) were examined to evaluate the worst-case CTV and nominal PTV dose distributions. Results: Inspection demonstrates that the CTV DVH in the nominal dose distribution is indeed bounded by the CTV DVHs in the worst-case dose distributions. Furthermore, comparison of the D95% for the worst-case (cold) CTV and nominal PTV distributions by Pearson's chi-square test shows excellent agreement for all plans. Conclusion: The assumption that the nominal dose distribution for PTV represents the worst-case dose distribution for CTV appears valid for the five plans under examination. Although the worst-case dose distributions are unphysical since the dose per voxel is chosen independently, the cold worst-case distribution serves as a lower bound for the worst-case possible CTV coverage. Minor discrepancies between the nominal PTV dose distribution and worst-case CTV dose distribution are expected since the dose cloud is not strictly static.
This research was supported by the NCI through grant K25CA168984, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by the Fraternal Order of Eagles Cancer Research Fund, and by the Career Development Award Program at Mayo Clinic.
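The per-voxel worst-case construction described in the abstract above can be sketched with arrays: stack the nominal and shifted dose grids, then take the elementwise minimum (cold) and maximum (hot). The dose grids here are random placeholders and the shifts are mocked by rolling the array, standing in for the actual recalculated Eclipse doses.

```python
import numpy as np

rng = np.random.default_rng(0)
nominal = rng.uniform(60.0, 70.0, size=(8, 8, 8))   # nominal dose grid, Gy

# Six perturbed plans: isocenter shifted +/- one PTV margin along each axis.
# The recalculated doses are mocked here by rolling the nominal grid.
perturbed = [np.roll(nominal, shift, axis=ax)
             for ax in range(3) for shift in (-1, +1)]

all_plans = np.stack([nominal] + perturbed)
cold = all_plans.min(axis=0)   # worst-case "cold" distribution (per-voxel min)
hot = all_plans.max(axis=0)    # worst-case "hot" distribution (per-voxel max)

# The nominal dose in every voxel is bounded by the two worst cases
assert np.all(cold <= nominal) and np.all(nominal <= hot)
print("cold/hot bounds hold for all voxels")
```

As the abstract notes, these composite distributions are unphysical (each voxel takes its extreme independently), but that is exactly what makes them bounds.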
Time Safety Margin: Theory and Practice
2016-09-01
Basic Dive Recovery Terminology. The simplest definition of TSM: Time Safety Margin is the time to directly travel from the worst-case vector to an...Safety Margin (TSM). TSM is defined as the time in seconds to directly travel from the worst case vector (i.e., the worst case combination of parameters)...invoked by this AFI, base recovery planning and risk management upon the calculated TSM. TSM is the time in seconds to directly travel from the worst case...
49 CFR 238.431 - Brake system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow... a brake rate consistent with prevailing adhesion, passenger safety, and brake system thermal... adhesion control system designed to automatically adjust the braking force on each wheel to prevent sliding...
40 CFR 300.135 - Response operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY... discharge is a worst case discharge as discussed in § 300.324; the pathways to human and environmental exposure; the potential impact on human health, welfare, and safety and the environment; whether the...
Thermal-hydraulic analysis of N Reactor graphite and shield cooling system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, J.O.; Schmitt, B.E.
1988-02-01
A series of bounding (worst-case) calculations were performed using a detailed hydrodynamic RELAP5 model of the N Reactor graphite and shield cooling system (GSCS). These calculations were specifically aimed at answering issues raised by the Westinghouse Independent Safety Review (WISR) committee. These questions address the operability of the GSCS during a worst-case degraded-core accident that requires the GSCS to mitigate the consequences of the accident. An accident scenario previously developed was designated as the hydrogen-mitigation design-basis accident (HMDBA). Previous HMDBA heat transfer analysis, using the TRUMP-BD code, was used to define the thermal boundary conditions that the GSCS may be exposed to. These TRUMP/HMDBA analysis results were used to define the bounding operating conditions of the GSCS during the course of an HMDBA transient. Nominal and degraded GSCS scenarios were investigated using RELAP5 within or at the bounds of the HMDBA transient. 10 refs., 42 figs., 10 tabs.
Specifying design conservatism: Worst case versus probabilistic analysis
NASA Technical Reports Server (NTRS)
Miles, Ralph F., Jr.
1993-01-01
Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
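The tradeoff described in the abstract above can be made concrete with a toy tolerance stack-up: a worst-case analysis sets every parameter to its tolerance limit simultaneously, while a probabilistic analysis asks how likely that combination actually is. The number of parameters, the tolerances, and the normal distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Five independent parameters, each with a +/- 1.0 tolerance taken as 3 sigma
n_params, tol = 5, 1.0

# Worst-case: every parameter at its tolerance limit at the same time
worst_case = n_params * tol

# Probabilistic: Monte Carlo with each parameter ~ N(0, (tol/3)^2)
samples = rng.normal(0.0, tol / 3.0, size=(100_000, n_params)).sum(axis=1)
p999 = np.quantile(samples, 0.999)

print(f"worst-case stack-up: {worst_case:.2f}")
print(f"99.9th-percentile stack-up: {p999:.2f}")
print(f"fraction of trials exceeding worst case: {(samples > worst_case).mean():.1e}")
```

The 99.9%-confidence bound comes out far below the worst-case stack, illustrating the paper's point: worst-case analysis says nothing about the probability of failure and can greatly overstate the required conservatism.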
NASA Technical Reports Server (NTRS)
Avila, Arturo
2011-01-01
Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperatures, so that these simulations represent the upper and lower bounds. This, effectively, represents the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 2 2012-07-01 2012-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 253.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2010 CFR
2010-07-01
...: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000 bbls but not more than... must demonstrate OSFR in accordance with the following table: COF worst case oil-spill discharge volume... applicable table in paragraph (b)(1) or (b)(2) for a facility with a potential worst case oil-spill discharge...
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 2 2013-07-01 2013-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 2 2014-07-01 2014-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
49 CFR 238.431 - Brake system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Brake system. 238.431 Section 238.431... Equipment § 238.431 Brake system. (a) A passenger train's brake system shall be capable of stopping the... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2013 CFR
2013-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2012 CFR
2012-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2014 CFR
2014-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 2 2011-07-01 2011-07-01 false How do I determine the worst case oil-spill... ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To...
30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How do I determine the worst case oil-spill... INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate the amount...
Lower bound for LCD image quality
NASA Astrophysics Data System (ADS)
Olson, William P.; Balram, Nikhil
1996-03-01
The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics the information of interest is often at the highest frequencies in the image. Since LCDs are sampled data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above it is essential to have a lower (worst case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst case modulation transfer function (MTF). The intersection of the worst case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst case limiting resolution, this MTF is combined with the CTF to produce objective worst case image quality values using the modulation transfer function area (MTFA) metric.
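The phase dependence described above can be sketched numerically: sample a sine wave on the pixel grid, sweep the phase between the signal and the sampling points, and keep the minimum output modulation at each spatial frequency as the worst case. The pixel model here is an idealized point sampler, much simpler than a real LCD aperture, so this is only the flavor of the worst-case MTF construction.

```python
import numpy as np

def worst_case_modulation(freq, n_pixels=256, n_phases=64):
    """Minimum (over phase) output modulation of a sampled sine wave.
    freq is in cycles per pixel (Nyquist = 0.5)."""
    x = np.arange(n_pixels)
    worst = 1.0
    for phase in np.linspace(0.0, 2*np.pi, n_phases, endpoint=False):
        samples = 0.5 + 0.5*np.sin(2*np.pi*freq*x + phase)
        modulation = ((samples.max() - samples.min())
                      / (samples.max() + samples.min()))
        worst = min(worst, modulation)
    return worst

# Modulation is phase-robust at low frequency but collapses at Nyquist,
# where an unlucky phase puts every sample at the sine's zero crossings
for f in (0.1, 0.25, 0.5):
    print(f"{f:.2f} cyc/px -> worst-case modulation {worst_case_modulation(f):.3f}")
```

Intersecting this worst-case curve with a visual contrast threshold would give the guaranteed-detectable limiting frequency, analogous to the MTF/CTF intersection in the paper.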
Probabilistic Solar Energetic Particle Models
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.
2011-01-01
To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceed, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provides the worst-case differential proton spectrum. This model is based on data from the IMP-8 and GOES spacecraft, which provide a database extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
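The notion of a worst-case environment at a user-specified confidence level reduces, in its simplest form, to a percentile of an event-size distribution. The sketch below uses a lognormal draw as a stand-in for an event catalog; the distribution, its parameters, and the fluence scale are placeholders, not the actual IMP-8/GOES-based model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mock catalog of event-integrated proton fluences (cm^-2), drawn from a
# lognormal as a stand-in for a multi-decade observational database
fluences = rng.lognormal(mean=np.log(1e8), sigma=1.5, size=5000)

def worst_case_fluence(confidence):
    """Event fluence that is not exceeded with the given confidence."""
    return np.quantile(fluences, confidence)

for c in (0.50, 0.90, 0.99):
    print(f"{c:.0%} confidence worst-case fluence: "
          f"{worst_case_fluence(c):.2e} cm^-2")
```

Raising the confidence level raises the design environment, which is the cost-versus-risk tradeoff the probabilistic model is meant to expose.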
A Minimax Network Flow Model for Characterizing the Impact of Slot Restrictions
NASA Technical Reports Server (NTRS)
Lee, Douglas W.; Patek, Stephen D.; Alexandrov, Natalia; Bass, Ellen J.; Kincaid, Rex K.
2010-01-01
This paper proposes a model for evaluating long-term measures to reduce congestion at airports in the National Airspace System (NAS). This model is constructed with the goal of assessing the global impacts of congestion management strategies, specifically slot restrictions. We develop the Minimax Node Throughput Problem (MINNTHRU), a multicommodity network flow model that provides insight into air traffic patterns when one minimizes the worst-case operation across all airports in a given network. MINNTHRU is thus formulated as a model where congestion arises from network topology. It reflects not market-driven airline objectives, but those of a regulatory authority seeking a distribution of air traffic beneficial to all airports, in response to congestion management measures. After discussing an algorithm for solving MINNTHRU for moderate-sized (30 nodes) and larger networks, we use this model to study the impacts of slot restrictions on the operation of an entire hub-spoke airport network. For both a small example network and a medium-sized network based on 30 airports in the NAS, we use MINNTHRU to demonstrate that increasing the severity of slot restrictions increases the traffic around unconstrained hub airports as well as the worst-case level of operation over all airports.
Cundell, A M; Bean, R; Massimore, L; Maier, C
1998-01-01
To determine the relationship between the sampling time of environmental monitoring (i.e., viable counts) in aseptic filling areas and the microbial count and frequency of alerts for air, surface, and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. The analyses showed that, except for one floor location with a significantly higher number of counts (but no alert- or action-level samples) in the first two hours of operation, there was no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at that floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst case time for environmental monitoring at that facility and that sampling at any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during set-up and aseptic filling.
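The statistical question in the abstract above (is there a worst-case sampling time?) amounts to testing for association between sampling time and microbial count. A minimal sketch with mock data, using a simple permutation test on the correlation coefficient rather than whatever specific tests the authors ran:

```python
import numpy as np

rng = np.random.default_rng(3)

# Mock monitoring data: counts every 30 min over an 8-hour fill, two days
times = np.tile(np.arange(0, 480, 30), 2).astype(float)   # minutes into fill
counts = rng.poisson(2.0, size=times.size).astype(float)  # CFU counts, no trend

# Observed correlation between sampling time and count
obs = np.corrcoef(times, counts)[0, 1]

# Permutation test: shuffle counts to build the null distribution of r
null = np.array([np.corrcoef(times, rng.permutation(counts))[0, 1]
                 for _ in range(2000)])
p_value = np.mean(np.abs(null) >= abs(obs))
print(f"r = {obs:.2f}, permutation p = {p_value:.3f}")
```

A large p-value, as expected for this trend-free mock data, corresponds to the study's conclusion that no sampling time is systematically worse than another.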
Zhu, Zhengfei; Liu, Wei; Gillin, Michael; Gomez, Daniel R; Komaki, Ritsuko; Cox, James D; Mohan, Radhe; Chang, Joe Y
2014-05-06
We assessed the robustness of passive scattering proton therapy (PSPT) plans for patients in a phase II trial of PSPT for stage III non-small cell lung cancer (NSCLC) by using the worst-case scenario method, and compared the worst-case dose distributions with the appearance of locally recurrent lesions. Worst-case dose distributions were generated for each of 9 patients who experienced recurrence after concurrent chemotherapy and PSPT to 74 Gy(RBE) for stage III NSCLC by simulating and incorporating uncertainties associated with set-up, respiration-induced organ motion, and proton range in the planning process. The worst-case CT scans were then fused with the positron emission tomography (PET) scans to locate the recurrence. Although the volumes enclosed by the prescription isodose lines in the worst-case dose distributions were consistently smaller than enclosed volumes in the nominal plans, the target dose coverage was not significantly affected: only one patient had a recurrence outside the prescription isodose lines in the worst-case plan. PSPT is a relatively robust technique. Local recurrence was not associated with target underdosage resulting from estimated uncertainties in 8 of 9 cases.
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. A safety determination and enhancement approach based on hazard analysis, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is then presented, using the same example case study as the REBST discussion. In the end it is concluded that an approach combining the two theories works best to reduce safety risk.
Bartnicki, Jerzy; Amundsen, Ingar; Brown, Justin; Hosseini, Ali; Hov, Øystein; Haakenstad, Hilde; Klein, Heiko; Lind, Ole Christian; Salbu, Brit; Szacinski Wendel, Cato C; Ytre-Eide, Martin Album
2016-01-01
The Russian nuclear submarine K-27 suffered a loss-of-coolant accident in 1968 and, with nuclear fuel still in both reactors, was scuttled in 1981 in the outer part of Stepovogo Bay on the eastern coast of Novaya Zemlya. The inventory of spent nuclear fuel on board the submarine is of concern because it represents a potential source of radioactive contamination of the Kara Sea, and a criticality accident with potential for long-range atmospheric transport of radioactive particles cannot be ruled out. To address these concerns and to provide a better basis for evaluating possible radiological impacts of potential releases should a salvage operation be initiated, we assessed the atmospheric transport and deposition of radionuclides in Norway from a hypothetical criticality accident on board the K-27. To achieve this, a long-term (33-year) meteorological database was prepared and used to select the worst-case meteorological scenarios for each of three candidate locations of the potential accident. Next, the dispersion model SNAP was run with the source term for the worst-case accident scenario and the selected meteorological scenarios. The results showed predictions to be very sensitive to the estimation of the source term for the worst-case accident, and especially to the sizes and densities of released radioactive particles. The results indicated that a large area of Norway could be affected, but that deposition in Northern Norway would be considerably higher than in other parts of the country. The simulations showed that deposition from the worst-case scenario of a hypothetical K-27 accident would be at least two orders of magnitude lower than the deposition observed in Norway following the Chernobyl accident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
LANDSAT-D MSS/TM tuned orbital jitter analysis model LDS900
NASA Technical Reports Server (NTRS)
Pollak, T. E.
1981-01-01
The final LANDSAT-D orbital dynamic math model (LSD900), comprising all test-validated substructures, was used to evaluate the jitter response of the MSS/TM experiments. A dynamic forced-response analysis was performed at both the MSS and TM locations over all structural modes considered (through 200 Hz). The analysis determined the roll angular response of the MSS/TM experiments to the excitation generated by component operation. Cross-axis and cross-experiment responses were also calculated. The excitations were analytically represented by seven- and nine-term Fourier series approximations, for the MSS and TM experiments respectively, which enabled linear harmonic solution techniques to be applied to the response calculations. Single worst-case jitter was estimated by variations of the eigenvalue spectrum of model LSD900. The probability of any worst-case mode occurrence was investigated.
78 FR 53494 - Dam Safety Modifications at Cherokee, Fort Loudoun, Tellico, and Watts Bar Dams
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... fundamental part of this mission was the construction and operation of an integrated system of dams and... by the Federal Emergency Management Agency, TVA prepares for the worst case flooding event in order... appropriate best management practices during all phases of construction and maintenance associated with the...
A Comparison of Learning Technologies for Teaching Spacecraft Software Development
ERIC Educational Resources Information Center
Straub, Jeremy
2014-01-01
The development of software for spacecraft represents a particular challenge and is, in many ways, a worst case scenario from a design perspective. Spacecraft software must be "bulletproof" and operate for extended periods of time without user intervention. If the software fails, it cannot be manually serviced. Software failure may…
Multiple Microcomputer Control Algorithm.
1979-09-01
discrete and semaphore supervisor calls can be used with tasks in separate processors, in which case they are maintained in shared memory. Operations on ...the source or destination operand specifier of each mode in most cases. However, four of the 16 general register addressing modes and one of the 8 pro...instruction time is based on the specified usage factors and the best case and worst case execution times for the instruc...
Space Based Intelligence, Surveillance, and Reconnaissance Contribution to Global Strike in 2035
2012-02-15
include using high altitude air platforms and airships as a short-term solution, and small satellites with an Operationally Responsive Space (ORS) launch...irreversible threats, along with a worst case scenario. Section IV provides greater detail of the high altitude air platform, airship, and commercial space...Resultantly, the U.S. could use high altitude air platforms, airships, and cyber to complement its space systems in case of denial, degradation, or
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
Updated model assessment of pollution at major U. S. airports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamartino, R.J.; Rote, D.M.
1979-02-01
The air quality impact of aircraft at and around Los Angeles International Airport (LAX) was simulated for hours of peak aircraft operation and 'worst case' pollutant dispersion conditions by using an updated version of the Argonne Airport Vicinity Air Pollution model; field programs at LAX, O'Hare, and John F. Kennedy International Airports determined the 'worst case' conditions. Maximum carbon monoxide concentrations at LAX were low relative to National Ambient Air Quality Standards; relatively high and widespread hydrocarbon concentrations indicated that aircraft emissions may aggravate oxidant problems near the airport; nitrogen oxide concentrations were close to the levels set in proposed standards. Data on typical time-in-mode for departing and arriving aircraft, the 8/4/77 diurnal variation in airport activity, and carbon monoxide concentration isopleths are given, and the update factors in the model are discussed.
VEGA Launch Vehicle Dynamic Environment: Flight Experience and Qualification Status
NASA Astrophysics Data System (ADS)
Di Trapani, C.; Fotino, D.; Mastrella, E.; Bartoccini, D.; Bonnet, M.
2014-06-01
During flight, the VEGA Launch Vehicle (LV) is equipped with more than 400 sensors (pressure transducers, accelerometers, microphones, strain gauges...) aimed at capturing the physical phenomena occurring during the mission. The main objective of these sensors is to verify that the flight conditions are compliant with the launch vehicle and satellite qualification status and to characterize the phenomena that occur during flight. During VEGA development, several test campaigns were performed to characterize its dynamic environment and identify the worst-case conditions, but only through flight data analysis is it possible to confirm the worst cases identified and to check the compliance of the operative life conditions with the components' qualification status. The scope of the present paper is to show a comparison of the sinusoidal dynamic phenomena that occurred during VEGA's first and second flights and to give a summary of the launch vehicle qualification status.
40 CFR 57.405 - Formulation, approval, and implementation of requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... study shall be submitted after the end of the worst case three-month period as a part of the next semi... study demonstrating that the SCS will prevent violations of the NAAQS in the smelter's DLA at all times. The reliability study shall include a comprehensive analysis of the system's operation during one or...
Jain, Dhruv; Tikku, Gargi; Bhadana, Pallavi; Dravid, Chandrashekhar; Grover, Rajesh Kumar
2017-08-01
We investigated World Health Organization (WHO) grading and pattern of invasion based histological schemes as independent predictors of disease-free survival in oral squamous carcinoma patients. Tumor resection slides of eighty-seven oral squamous carcinoma patients [pTNM: I&II/III&IV-32/55] were evaluated. Besides examining various patterns of invasion, the invasive front grade and the predominant and worst (highest) WHO grades were recorded. For worst WHO grading, the poor-undifferentiated component was estimated semi-quantitatively at the advancing tumor edge (invasive growth front) in histology sections. Tumor recurrence was observed in 31 (35.6%) cases. The 2-year disease-free survival was 47% [median: 656 days; follow-up: 14-1450 days]. Using receiver operating characteristic curves, we defined a poor-undifferentiated component exceeding 5% of the tumor as the cutoff to assign an oral squamous carcinoma as grade-3 when following worst WHO grading. Kaplan-Meier curves for disease-free survival revealed prognostic associations with nodal involvement, tumor size, worst WHO grading, most common pattern of invasion, and invasive pattern grading score (sum of the two most predominant patterns of invasion). In further multivariate analysis, tumor size (>2.5cm) and worst WHO grading (grade-3 tumors) independently predicted reduced disease-free survival [HR 2.85, P=0.028 and HR 3.37, P=0.031, respectively]. The inter-observer agreement was moderate for observers who semi-quantitatively estimated the percentage of poor-undifferentiated morphology in oral squamous carcinomas. Our results support the value of the semi-quantitative method of assigning tumors as grade-3 under worst WHO grading for predicting reduced disease-free survival. Despite limitations, of the various histological tumor stratification schemes, WHO grading holds adjunctive value for its prognostic role, ease, and universal familiarity. Copyright © 2017 Elsevier Inc. All rights reserved.
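The ROC-based cutoff selection described above can be sketched generically. This is a minimal illustration using Youden's J statistic, a common ROC criterion for dichotomizing a continuous marker; the function `youden_cutoff` and its toy data are hypothetical, not the authors' exact procedure.

```python
import numpy as np

def youden_cutoff(values, recurred):
    """Pick the cutoff on a continuous marker (e.g. % poor-undifferentiated
    component) that maximizes sensitivity + specificity - 1 (Youden's J),
    scanning every observed value as a candidate threshold."""
    values = np.asarray(values, float)
    recurred = np.asarray(recurred, bool)
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):
        pred = values > cut                      # predicted "high grade"
        sens = (pred & recurred).sum() / recurred.sum()
        spec = (~pred & ~recurred).sum() / (~recurred).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, float(cut)
    return best_cut, best_j
```

With perfectly separated toy data the selected cutoff sits at the boundary between the two groups; real markers trade sensitivity against specificity.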
On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2017-06-21
The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure remains compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on maximizing the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case under all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under head imaging exposure are provided as illustrative examples.
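The "common approach" the paper reviews, maximizing a ratio of two Hermitian forms via a generalized eigenvalue problem, can be sketched as follows. The matrices here are random positive-definite stand-ins, not actual SAR matrices, and only a single constraint is modeled; as the abstract notes, this single-constraint setting is where the eigenvalue approach suffices.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8  # number of RF transmit channels

# Hypothetical Hermitian positive-definite matrices: A models power
# deposition near the implant, B models one global SAR/power constraint.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + n * np.eye(n)
M2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = M2 @ M2.conj().T + n * np.eye(n)

# Worst case of (x^H A x)/(x^H B x) over channel excitations x is the
# largest generalized eigenvalue of the pencil (A, B).
vals, vecs = eigh(A, B)           # eigenvalues in ascending order
worst_ratio = vals[-1]
x = vecs[:, -1]                   # worst-case excitation vector

# The eigenvector attains the worst-case ratio.
attained = (x.conj() @ A @ x).real / (x.conj() @ B @ x).real
```

With several simultaneous constraints no single eigenvector need satisfy them all, which is the gap the paper's semidefinite relaxation closes.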
On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI
NASA Astrophysics Data System (ADS)
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2017-06-01
The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure remains compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on maximizing the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case under all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under head imaging exposure are provided as illustrative examples.
Analysis of critical operating conditions for LV distribution networks with microgrids
NASA Astrophysics Data System (ADS)
Zehir, M. A.; Batman, A.; Sonmez, M. A.; Font, A.; Tsiamitros, D.; Stimoniaris, D.; Kollatou, T.; Bagriyanik, M.; Ozdemir, A.; Dialynas, E.
2016-11-01
Increasing penetration of Distributed Generation (DG) in distribution networks raises the risk of voltage limit violations while contributing to line losses. Especially in low voltage (LV) distribution networks (secondary distribution networks), the impacts of active power flows on bus voltages and network losses are more dominant. As network operators must meet regulatory limitations, they have to take into account the most critical operating conditions in their systems. In this study, we aim to present the impact of the worst-case operating conditions of LV distribution networks comprising microgrids. Simulation studies are performed on a field-data-based virtual test bed. The simulations are repeated for several cases consisting of different microgrid points of connection with different network loading and microgrid supply/demand conditions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Code of Federal Regulations, 2014 CFR
2014-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Code of Federal Regulations, 2013 CFR
2013-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model had changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
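The Expected Improvement criterion at the heart of EGO has a standard closed form when the surrogate returns a Gaussian prediction (mean `mu`, standard deviation `sigma`). A minimal sketch, independent of the SAGE III model, for a minimization objective:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: expected amount by which a candidate improves
    on the best value found so far, under a Gaussian surrogate prediction
    with mean `mu` and standard deviation `sigma`."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Where the surrogate is certain (sigma == 0) there is nothing to learn.
    return np.where(sigma > 0, ei, 0.0)
```

Each EGO iteration evaluates the true model at the point(s) of maximum EI, refits the surrogate, and repeats; the paper's multi-start modification simply harvests several EI maxima per iteration.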
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently for real-world applications.
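The min-max structure of a single player's worst-case weighted objective can be sketched with finite action and weight sets: judge each action by its worst weighting, then pick the action with the best such worst value. The matrices below are hypothetical illustration data, not the paper's supply chain instance.

```python
import numpy as np

# F[a, i] = value of objective i under action a (rows = actions).
F = np.array([[3.0, 1.0],
              [2.0, 2.5],
              [1.0, 4.0]])
# W[w, i] = candidate weight vectors (rows of the player's weight set).
W = np.array([[0.5, 0.5],
              [0.8, 0.2],
              [0.2, 0.8]])

weighted = F @ W.T             # weighted[a, w] = sum_i W[w, i] * F[a, i]
worst = weighted.max(axis=1)   # each action judged by its worst weighting
best_action = int(worst.argmin())          # min-max choice
worst_case_value = float(worst[best_action])
```

For polytopal weight sets the inner maximization is a linear program over the weights, which is what lets the equilibrium problem be recast as an MPEC.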
An SEU resistant 256K SOI SRAM
NASA Astrophysics Data System (ADS)
Hite, L. R.; Lu, H.; Houston, T. W.; Hurta, D. S.; Bailey, W. E.
1992-12-01
A novel SEU (single event upset) resistant SRAM (static random access memory) cell has been implemented in a 256K SOI (silicon on insulator) SRAM that has attractive performance characteristics over the military temperature range of -55 to +125 C. These include a worst-case access time of 40 ns with an active power of only 150 mW at 25 MHz, and a worst-case minimum WRITE pulse width of 20 ns. Measured SEU performance gives an Adams 10 percent worst-case error rate of 3.4 × 10^-11 errors/bit-day using the CRUP code with a conservative first-upset LET threshold. Modeling does show that higher bipolar gain than that measured on a sample from the SRAM lot would produce a lower error rate. Measurements show the worst-case supply voltage for SEU to be 5.5 V. Analysis has shown this to be primarily caused by the drain voltage dependence of the beta of the SOI parasitic bipolar transistor. Based on this, SEU experiments with SOI devices should include measurements as a function of supply voltage, rather than only at the traditional 4.5 V, to determine the worst-case condition.
Code of Federal Regulations, 2011 CFR
2011-07-01
... evaluation criteria for facilities that handle, store, or transport Group V petroleum oils. 154.1047 Section... Group V petroleum oils. (a) An owner or operator of a facility that handles, stores, or transports Group...) Procedures and strategies for responding to a worst case discharge of Group V petroleum oils to the maximum...
Code of Federal Regulations, 2013 CFR
2013-07-01
... evaluation criteria for facilities that handle, store, or transport Group V petroleum oils. 154.1047 Section... Group V petroleum oils. (a) An owner or operator of a facility that handles, stores, or transports Group...) Procedures and strategies for responding to a worst case discharge of Group V petroleum oils to the maximum...
Code of Federal Regulations, 2010 CFR
2010-07-01
... evaluation criteria for facilities that handle, store, or transport Group V petroleum oils. 154.1047 Section... Group V petroleum oils. (a) An owner or operator of a facility that handles, stores, or transports Group...) Procedures and strategies for responding to a worst case discharge of Group V petroleum oils to the maximum...
Code of Federal Regulations, 2014 CFR
2014-07-01
... evaluation criteria for facilities that handle, store, or transport Group V petroleum oils. 154.1047 Section... Group V petroleum oils. (a) An owner or operator of a facility that handles, stores, or transports Group...) Procedures and strategies for responding to a worst case discharge of Group V petroleum oils to the maximum...
Code of Federal Regulations, 2012 CFR
2012-07-01
... evaluation criteria for facilities that handle, store, or transport Group V petroleum oils. 154.1047 Section... Group V petroleum oils. (a) An owner or operator of a facility that handles, stores, or transports Group...) Procedures and strategies for responding to a worst case discharge of Group V petroleum oils to the maximum...
Chi, Ching-Chi; Wang, Shu-Hui
2014-01-01
Compared to conventional therapies, biologics are more effective but more expensive in treating psoriasis. To evaluate the efficacy and cost-efficacy of biologic therapies for psoriasis, we conducted a meta-analysis to calculate the efficacy of etanercept, adalimumab, infliximab, and ustekinumab for at least 75% reduction in the Psoriasis Area and Severity Index score (PASI 75) and Physician's Global Assessment clear/minimal (PGA 0/1). The cost-efficacy was assessed by calculating the incremental cost-effectiveness ratio (ICER) per subject achieving PASI 75 and PGA 0/1. The incremental efficacy regarding PASI 75 was 55% (95% confidence interval (95% CI) 38%-72%), 63% (95% CI 59%-67%), 71% (95% CI 67%-76%), 67% (95% CI 62%-73%), and 72% (95% CI 68%-75%) for etanercept, adalimumab, infliximab, and ustekinumab 45 mg and 90 mg, respectively. The corresponding 6-month ICER regarding PASI 75 was $32,643 (best case $24,936; worst case $47,246), $21,315 (best case $20,043; worst case $22,760), $27,782 (best case $25,954; worst case $29,440), $25,055 (best case $22,996; worst case $27,075), and $46,630 (best case $44,765; worst case $49,373), respectively. The results regarding PGA 0/1 were similar. Infliximab and ustekinumab 90 mg had the highest efficacy. Meanwhile, adalimumab had the best cost-efficacy, followed by ustekinumab 45 mg and infliximab.
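The ICER used above is simple arithmetic: incremental cost divided by incremental efficacy, i.e. extra cost per additional subject achieving the outcome. A minimal sketch with hypothetical figures (the numbers below are illustrative, not taken from the study):

```python
def icer(delta_cost, delta_outcome):
    """Incremental cost-effectiveness ratio: extra cost per additional
    subject achieving the outcome (e.g. PASI 75) versus the comparator."""
    if delta_outcome <= 0:
        raise ValueError("ICER is undefined without incremental benefit")
    return delta_cost / delta_outcome

# Hypothetical 6-month example: a biologic costing $13,500 more than the
# comparator, with 63% more subjects achieving PASI 75.
example = icer(13500.0, 0.63)
```

Best- and worst-case ICERs then follow from re-running the division with the confidence-interval bounds of the incremental efficacy.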
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Wang, X; Li, H
Purpose: Proton therapy is more sensitive to uncertainties than photon treatment because the protons' finite range depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted at our institution for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and range uncertainty, are explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y, and z directions were created and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario result. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of the perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive sampling approach using the fast approximate method could be used in the future to evaluate the effects of different factors on the robustness of IMPT plans.
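The comparison of corner-scenario worst cases against random perturbations can be sketched with a toy dose model. Everything here is hypothetical: `dose_index` is a stand-in that degrades monotonically with setup shift and range error (a simplification under which the corner scenarios provably bound the random samples), not the paper's fast dose engine.

```python
import numpy as np

rng = np.random.default_rng(1)

def dose_index(shift_mm, range_err):
    """Toy stand-in for a dosimetric index (e.g. CTV D95, in %) that
    degrades monotonically with setup shift and relative range error."""
    return 100.0 - 2.0 * np.max(np.abs(shift_mm)) - 150.0 * abs(range_err)

# Worst-case scenario method: evaluate only the corner perturbations
# (+/-3 mm along each axis combined with +/-3.5% range error).
corners = [(np.array(s, float), r)
           for s in [(3, 0, 0), (-3, 0, 0), (0, 3, 0),
                     (0, -3, 0), (0, 0, 3), (0, 0, -3)]
           for r in (-0.035, 0.035)]
worst_case = min(dose_index(s, r) for s, r in corners)

# Extensive sampling: 1000 random perturbations inside the same bounds.
samples = [dose_index(rng.uniform(-3, 3, size=3), rng.uniform(-0.035, 0.035))
           for _ in range(1000)]
frac_not_worse = sum(d >= worst_case for d in samples) / len(samples)
```

Real dose distributions are not perfectly monotone in the perturbations, which is why the paper still finds a small percentage of random cases outside the worst-case bound.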
Instability study for LOFT for L2-1, L2-2, and L2-3 pretest steady-state operating conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eide, S.A.
The results are presented of a thermal-hydrodynamic flow instability study of the LOFT reactor for the L2-1, L2-2, and L2-3 pretest steady-state operating conditions. Comparison is made between the LOFT reactor and a typical PWR, and the effects on stability of differences in operating parameters and geometry are discussed. Results indicate that the LOFT reactor will be thermal-hydrodynamically stable for nominal and worst case operating conditions. The study supports the LOFT Experimental Safety Analyses for the L2-1, L2-2, and L2-3 tests.
Preparing for the Worst-Case Scenario--Planning Pays Off with First-Ever Stadium Evacuation
ERIC Educational Resources Information Center
Bradley, Carol C.
2012-01-01
It's Saturday, September 3, Notre Dame vs. South Florida--the first home game of the season--and nearing halftime. The only person feeling more pressure than Head Coach Brian Kelly is Mike Seamon, associate vice president for campus safety and director of game day operations. Bad weather is on the way, and he's about to make the call to evacuate…
Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry
2011-03-01
For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
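The percentile-based scenario selection described above can be sketched numerically: pick the soil/climate combination whose predicted leaching concentration best matches a chosen high percentile of the spatial distribution. The concentration values below are hypothetical illustration data.

```python
import numpy as np

# Hypothetical predicted leaching concentrations (ug/L) for one pesticide,
# one value per soil/climate combination in a region.
conc = np.array([0.02, 0.15, 0.07, 0.31, 0.11, 0.26, 0.05, 0.44, 0.09, 0.18])

# A worst-case scenario is the combination whose predicted concentration
# corresponds to a chosen percentile (here the 90th) of the spatial
# concentration distribution.
target = np.percentile(conc, 90)
scenario = int(np.argmin(np.abs(conc - target)))  # index of that combination
```

Because different pesticides rank the combinations differently, the scenario selected this way for one compound need not be worst-case for another; that mismatch is the "scenario uncertainty" the abstract discusses.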
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero sum game model where each player has more than one competing objective. Our “worst-case weighted multi-objective game” model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call “robust-weighted Nash equilibrium”. We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently for real-world applications. PMID:26820512
Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe
2015-01-01
We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%- D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. 
Although worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Derivation and experimental verification of clock synchronization theory
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1994-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst case conditions.
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
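The core correction rule of the fault-tolerant midpoint family of algorithms studied above can be sketched in a few lines. This is an illustrative software model under assumed names and interface, not the special clock circuitry the experiments used:

```python
def midpoint_correction(readings, f):
    """Fault-tolerant midpoint rule: discard the f lowest and f highest
    skew readings (potentially from faulty clocks), then correct toward
    the midpoint of the remaining extremes."""
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2*f readings to tolerate f faults")
    trimmed = sorted(readings)[f:len(readings) - f]
    return (trimmed[0] + trimmed[-1]) / 2
```

For example, with readings `[0.0, 1.0, 2.0, 100.0]` and one tolerated fault, the outliers 0.0 and 100.0 are discarded and the correction is the midpoint of 1.0 and 2.0, i.e. 1.5, so a single maliciously large reading cannot drag the correction arbitrarily far.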
Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift
NASA Astrophysics Data System (ADS)
Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.
2012-01-01
This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
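The kinematic idea behind the ZMP wheel-lift criterion can be illustrated with a rigid point-mass approximation. The function names and the simplification are assumptions for illustration; the paper's kinematic vehicle model is more detailed:

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_lateral_offset(a_y, h_cg):
    """Lateral offset of the zero-moment point from the CG ground
    projection for a rigid vehicle: a_y is lateral acceleration (m/s^2),
    h_cg is the height of the center of gravity (m)."""
    return h_cg * a_y / G

def predicts_wheel_lift(a_y, h_cg, track_width):
    """Wheel lift is predicted when the ZMP leaves the support polygon,
    i.e. its lateral offset exceeds half the track width."""
    return abs(zmp_lateral_offset(a_y, h_cg)) > track_width / 2
```

Note that this reduces to the static stability factor in steady state: lift is predicted when a_y/g exceeds t/(2h), which is why the ZMP generalizes that metric to transient manoeuvres.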
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10^-4 on the available data sets.
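Generic iris-code matching by fractional Hamming distance over small rotations can be sketched as follows. This is the standard scheme rather than the paper's specific product-of-sum variant, and all names are illustrative:

```python
def fractional_hamming(a, b):
    """Fraction of disagreeing bits between two equal-length bit lists."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def best_rotation_distance(a, b, max_shift=2):
    """Iris codes are compared over small circular shifts to absorb
    rotational misalignment of the eye; the minimum distance is kept."""
    n = len(a)
    return min(
        fractional_hamming(a, b[s % n:] + b[:s % n])
        for s in range(-max_shift, max_shift + 1)
    )
```

A verification threshold applied to this distance trades FAR against FRR: lowering the threshold rejects more impostors but also more genuine users.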
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model that requires less detailed information about the soil and weather conditions but still represents the effect of soil and climate on pesticide leaching using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision that the detailed model provides for the prediction of pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but has complete coverage of the area, so it selects a worst-case condition more accurately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustavson, T.C.
1979-01-01
Results are presented of a study to determine the acoustical noise distribution and impacts of the geothermal/geopressure well drilling operation near Chocolate Bayou in South Texas. Detailed noise survey data were included in a part of the study for computer simulations to develop representative and worst-case drilling operation noise predictions. Also conducted were baseline noise measurements throughout the Peterson Landing residential area. (MHT)
Performance of alkaline battery cells used in emergency locator transmitters
NASA Technical Reports Server (NTRS)
Haynes, G. A.; Sokol, S.; Motley, W. R., III; Mcclelland, E. L.
1984-01-01
The characteristics of battery power supplies for emergency locator transmitters (ELT's) were investigated by testing alkaline zinc/manganese dioxide cells of the type typically used in ELT's. Cells from four manufacturers were tested. The cells were subjected to simulated environmental and load conditions representative of those required for survival and operation. Battery cell characteristics that may contribute to ELT malfunctions and limitations were evaluated. Experimental results from the battery cell study are discussed, and an evaluation of ELT performance while operating under a representative worst-case environmental condition is presented.
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 28 2011-07-01 2011-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 29 2012-07-01 2012-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 29 2013-07-01 2013-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 28 2014-07-01 2014-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 1
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
Electrical characterization and qualification tests were performed on the RCA MWS5001D, 1024 by 1-bit, CMOS, random access memory. Characterization tests were performed on five devices. The tests included functional tests, an AC parametric worst-case pattern selection test, determination of the worst-case transition for setup and hold times, and a series of schmoo plots. The qualification tests were performed on 32 devices and included a 2000-hour burn-in with electrical tests performed at 0 hours and after 168, 1000, and 2000 hours of burn-in. The tests performed included functional tests and AC and DC parametric tests. All of the tests in the characterization phase, with the exception of the worst-case transition test, were performed at ambient temperatures of 25, -55, and 125 C. The worst-case transition test was performed at 25 C. The pre-burn-in electrical tests were performed at 25, -55, and 125 C. All burn-in endpoint tests were performed at 25, -40, -55, 85, and 125 C.
Draft Environmental Impact Statement-Consolidated Space Operations Center.
1980-10-01
about 10 in the human resonance (worst case) region of the electromagnetic spectrum. The USAF has a significant Radiofrequency Radiation (RFR... access is necessary. 3. Restrict all air traffic within 1000 feet of the antenna field to avoid possible exposure of electroexplosive devices to... Permissible Exposure Level (MPEL) for occupational personnel is 10 mW/cm2 and is based on past knowledge of radiofrequency radiation effects (Ref
Pavell, Anthony; Hughes, Keith A
2010-01-01
This article describes a method for achieving the load equivalence model, described in Parenteral Drug Association Technical Report 1, using a mass-based approach. The item and load bracketing approach allows for mixed equipment load size variation for operational flexibility along with decreased time to introduce new items to the operation. The article discusses the utilization of approximately 67 items/components (Table IV) identified for routine sterilization with varying quantities required weekly. The items were assessed for worst-case identification using four temperature-related criteria. The criteria were used to provide a data-based identification of worst-case items, and/or item equivalence, to carry forward into cycle validation using a variable load pattern. The mass approach to maximum load determination was used to bracket routine production use and allows for variable loading patterns. The result of the item mapping and load bracketing data is "a proven acceptable range" of sterilizing conditions including loading configuration and location. The application of these approaches, while initially more time/test-intensive than alternate approaches, provides a method of cycle validation with long-term benefit of ease of ongoing qualification, minimizing time and requirements for new equipment qualification for similar loads/use, and for rapid and rigorous assessment of new items for sterilization.
Query Optimization in Distributed Databases.
1982-10-01
general, the strategy a31 a11 a3 is more time consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires... analytic behavior of those heuristic algorithms. Although some analytic results of worst case and average case analysis are difficult to obtain, some...
ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*
Kim, Donghwan; Fessler, Jeffrey A.
2017-01-01
This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
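The standard FISTA iteration for ℓ1-regularized least squares, which the paper reinterprets as a worst-case-optimized acceleration of the proximal gradient method, can be sketched as a textbook implementation (names are illustrative; this is not the paper's modified algorithm):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Uses step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    of the gradient of the smooth term, plus Nesterov-style momentum."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        # Proximal gradient step at the extrapolated point y.
        x_new = soft_threshold(y - (A.T @ (A @ y - b)) / L, lam / L)
        # Momentum update of the extrapolation sequence.
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The momentum sequence is exactly what improves the worst-case cost bound from O(1/k) for plain proximal gradient to O(1/k^2).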
Grieger, Khara D; Hansen, Steffen F; Sørensen, Peter B; Baun, Anders
2011-09-01
Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C(60) in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C(60) in these applications as well as demonstrate the use of worst-case conditions to optimize future research efforts for other nanomaterials. Through the application of the WCD model, we find that the most probable worst-case conditions for both case studies include i) active uptake mechanisms, ii) accumulation in organisms, iii) ecotoxicological response mechanisms such as reactive oxygen species (ROS) production and cell membrane damage or disruption, iv) surface properties of nZVI and C(60), and v) acute exposure tolerance of organisms. Additional estimates of worst-case conditions for C(60) also include the physical location of C(60) in the environment from surface run-off, cellular exposure routes for heterotrophic organisms, and the presence of light to amplify adverse effects. 
Based on results of this analysis, we recommend the prioritization of research for the selected applications within the following areas: organism active uptake ability of nZVI and C(60) and ecotoxicological response end-points and response mechanisms including ROS production and cell membrane damage, full nanomaterial characterization taking into account detailed information on nanomaterial surface properties, and investigations of dose-response relationships for a variety of organisms. Copyright © 2011 Elsevier B.V. All rights reserved.
Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects
ERIC Educational Resources Information Center
Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor
2011-01-01
Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2014 CFR
2014-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2012 CFR
2012-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2013 CFR
2013-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
Parameter Impact on Sharing Studies Between UAS CNPC Satellite Transmitters and Terrestrial Systems
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Wilson, Jeffrey D.; Bishop, William D.
2015-01-01
In order to provide a control and non-payload communication (CNPC) link for civil-use unmanned aircraft systems (UAS) when operating in beyond-line-of-sight (BLOS) conditions, satellite communication links are generally required. The International Civil Aviation Organization (ICAO) has determined that the CNPC link must operate over protected aviation safety spectrum allocations. Although a suitable allocation exists in the 5030-5091 MHz band, no satellites provide operations in this band and none are currently planned. In order to avoid a very lengthy delay in the deployment of UAS in BLOS conditions, it has been proposed to use existing satellites operating in the Fixed Satellite Service (FSS), of which many operate in several spectrum bands. Regulatory actions by the International Telecommunications Union (ITU) are needed to enable such a use on an international basis, and indeed Agenda Item (AI) 1.5 for the 2015 World Radiocommunication Conference (WRC) was established to decide on the enactment of possible regulatory provisions. As part of the preparation for AI 1.5, studies on the sharing of FSS bands between existing services and CNPC for UAS are being contributed by NASA and others. These studies evaluate the potential impact of satellite CNPC transmitters operating from UAS on other in-band services, and the potential impact of other in-band services on satellite CNPC receivers operating on UAS platforms. Such studies are made more complex by the inclusion of what are essentially moving FSS earth stations, compared to typical sharing studies between fixed elements. Hence, determining the appropriate technical parameters for the studies is difficult. In order to enable a sharing study to be completed in a less-than-infinite amount of time, the number of parameters exercised must be greatly limited. Therefore, understanding the impact of various parameter choices is accomplished through sensitivity analyses.
In the case of sharing studies for AI 1.5, identification of worst-case parameters allows the studies to be focused on worst-case scenarios with assurance that other parameter combinations will yield comparatively better results and therefore do not need to be fully analyzed. In this paper, the results of such sensitivity analyses are presented for the case of sharing between UAS CNPC satellite transmitters and terrestrial receivers using the Fixed Service (FS) operating in the same bands, and the implications of these analyses on sharing study results.
Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien
2016-06-01
A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
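The design-then-tighten loop described above can be sketched abstractly. Here `design_pulse` and `worst_case_sar` are purely illustrative stand-ins for the pulse-design and worst-case-SAR computations; the constraint-shrinking rule is an assumption, not the paper's exact update:

```python
def tighten_until_safe(design_pulse, worst_case_sar, sar_limit, max_iter=20):
    """Iteratively re-design a pulse with a tightened SAR constraint until
    its worst-case SAR (accounting for RF transmit-chain errors) meets the
    safety limit. design_pulse(limit) returns a pulse whose nominal SAR
    satisfies `limit`; worst_case_sar(pulse) bounds SAR under errors."""
    limit = sar_limit
    for _ in range(max_iter):
        pulse = design_pulse(limit)
        wc = worst_case_sar(pulse)
        if wc <= sar_limit:
            return pulse, wc
        limit *= sar_limit / wc  # shrink the design constraint proportionally
    raise RuntimeError("did not converge to a safe pulse")
```

With a toy model in which the "pulse" is its nominal SAR and errors inflate SAR by 20%, the loop settles on a design constraint of about sar_limit/1.2, as expected.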
Guide for Oxygen Component Qualification Tests
NASA Technical Reports Server (NTRS)
Bamford, Larry J.; Rucker, Michelle A.; Dobbin, Douglas
1996-01-01
Oxygen is a chemically stable element; it is not shock sensitive, will not decompose, and is not flammable. However, oxygen use carries a risk that should never be overlooked, because oxygen is a strong oxidizer that vigorously supports combustion. Safety is of primary concern in oxygen service. To promote safety in oxygen systems, the flammability of materials used in them should be analyzed. At the NASA White Sands Test Facility (WSTF), we have performed configurational tests of components specifically engineered for oxygen service. These tests follow a detailed WSTF oxygen hazards analysis. The stated objective of the tests was to provide performance test data for customer use as part of a qualification plan for a particular component in a particular configuration, and under worst-case conditions. In this document - the 'Guide for Oxygen Component Qualification Tests' - we outline recommended test systems, and cleaning, handling, and test procedures that address worst-case conditions. It should be noted that test results apply specifically to: manual valves, remotely operated valves, check valves, relief valves, filters, regulators, flexible hoses, and intensifiers. Component systems are not covered.
A radiation briefer's guide to the PIKE Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steadman, Jr, C R
1990-03-01
Gamma-radiation-exposure estimates to populations living immediately downwind from the Nevada Test Site have been required for many years by the US Department of Energy (DOE) before each containment-designed nuclear detonation. A highly unlikely "worst-case" scenario is utilized which assumes that there will be an accidental massive venting of radioactive debris into the atmosphere shortly after detonation. The Weather Service Nuclear Support Office (WSNSO) has supplied DOE with such estimates for the last 25 years using the WSNSO Fallout Scaling Technique (FOST), which employs a worst-case analog event that actually occurred in the past. The "PIKE Model" is the application of the FOST using the PIKE nuclear event as the analog. This report, which is primarily intended for WSNSO meteorologists who derive radiation estimates, gives a brief history of the "model," presents the mathematical, radiological, and meteorological concepts upon which it is based, states its limitations, explains its apparent advantages over more sophisticated models, and details how it is used operationally. 10 refs., 31 figs., 7 tabs.
41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What is meant by "reasonable worst case fire scenario"? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is meant by "reasonable worst case fire scenario"? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...
Radiation Assurance for the Space Environment
NASA Technical Reports Server (NTRS)
Barth, Janet L.; LaBel, Kenneth A.; Poivey, Christian
2004-01-01
The space radiation environment can lead to extremely harsh operating conditions for spacecraft electronic systems. A hardness assurance methodology must be followed to assure that the space radiation environment does not compromise the functionality and performance of space-based systems during the mission lifetime. The methodology includes a definition of the radiation environment, assessment of the radiation sensitivity of parts, worst-case analysis of the impact of radiation effects, and part acceptance decisions which are likely to include mitigation measures.
Feedback system design with an uncertain plant
NASA Technical Reports Server (NTRS)
Milich, D.; Valavani, L.; Athans, M.
1986-01-01
A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.
The contribution of low-energy protons to the total on-orbit SEU rate
Dodds, Nathaniel Anson; Martinez, Marino J.; Dodd, Paul E.; ...
2015-11-10
Low- and high-energy proton experimental data and error rate predictions are presented for many bulk Si and SOI circuits from the 20-90 nm technology nodes to quantify how much low-energy protons (LEPs) can contribute to the total on-orbit single-event upset (SEU) rate. Every effort was made to predict LEP error rates that are conservatively high; even secondary protons generated in the spacecraft shielding have been included in the analysis. Across all the environments and circuits investigated, and when operating within 10% of the nominal operating voltage, LEPs were found to increase the total SEU rate to up to 4.3 times as high as it would have been in the absence of LEPs. Therefore, the best approach to account for LEP effects may be to calculate the total error rate from high-energy protons and heavy ions, and then multiply it by a safety margin of 5. If that error rate can be tolerated then our findings suggest that it is justified to waive LEP tests in certain situations. Trends were observed in the LEP angular responses of the circuits tested. As a result, grazing angles were the worst case for the SOI circuits, whereas the worst-case angle was at or near normal incidence for the bulk circuits.
Thermal Analysis of Iodine Satellite (iSAT)
NASA Technical Reports Server (NTRS)
Mauro, Stephanie
2015-01-01
This paper presents the progress of the thermal analysis and design of the Iodine Satellite (iSAT). The purpose of the iSAT spacecraft (SC) is to demonstrate the ability of the iodine Hall Thruster propulsion system throughout a one-year mission in an effort to mature the system for use on future satellites. The benefit of this propulsion system is that it uses a propellant, iodine, that is easy to store and provides a high thrust-to-mass ratio. The spacecraft will also act as a bus for an earth observation payload, the Long Wave Infrared (LWIR) Camera. Four phases of the mission, determined to either be critical to achieving requirements or phases of thermal concern, are modeled. The phases are the Right Ascension of the Ascending Node (RAAN) Change, Altitude Reduction, De-Orbit, and Science Phases. Each phase was modeled in a worst-case hot environment, and the coldest phase, the Science Phase, was also modeled in a worst-case cold environment. The thermal environments of the spacecraft are especially important to model because iSAT has a very high power density. The satellite is the size of a 12-unit CubeSat and dissipates slightly more than 75 watts of power as heat at times. The maximum temperatures for several components are above their maximum operational limit for one or more cases. The analysis done for the first Design and Analysis Cycle (DAC1) showed that many components were above or within 5 degrees Celsius of their maximum operational limit. The battery is a component of concern because although it is not over its operational temperature limit, its efficiency greatly decreases if it operates at the currently predicted temperatures. In the second Design and Analysis Cycle (DAC2), many steps were taken to mitigate the overheating of components, including isolating several high-temperature components, removal of components, and rearrangement of systems. These changes have greatly increased the thermal margin available.
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature sizes, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support a VLSI digital design flow based on logic synthesis, we have designed an optimized high-performance, low-power FinFET standard cell library that employs the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating-environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard-deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage respectively, while the reductions in normalized deviation can reach 98.37% and 24.13%, respectively, demonstrating that the optimized cells are less sensitive to variability and hence more reliable. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
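The per-cell statistics described above can be sketched with a toy Monte Carlo sweep. The alpha-power-law delay model, the threshold-voltage spreads, and the path count below are illustrative assumptions, not the paper's cell library:

```python
import random
import statistics

random.seed(42)

# Toy alpha-power-law delay model (an assumption, not the paper's cells):
# d = K * Vdd / (Vdd - Vth)**alpha
VDD, ALPHA, K = 0.9, 1.3, 1.0

def delay(vth):
    return K * VDD / (VDD - vth) ** ALPHA

def worst_case_delay(vth_per_path):
    # worst case over the cell's input-dependent paths
    return max(delay(v) for v in vth_per_path)

N_SWEEPS, N_PATHS = 10000, 4
SIGMA_NOMINAL = 0.03    # Gaussian Vth spread, non-optimal cell (made up)
SIGMA_OPTIMIZED = 0.02  # tighter spread, e.g. after mixed biasing (made up)

def mc_std(sigma):
    # standard deviation of the worst-case delay over N_SWEEPS samples
    samples = [
        worst_case_delay(random.gauss(0.35, sigma) for _ in range(N_PATHS))
        for _ in range(N_SWEEPS)
    ]
    return statistics.stdev(samples)

reduction = 1.0 - mc_std(SIGMA_OPTIMIZED) / mc_std(SIGMA_NOMINAL)
print(f"std-dev reduction in worst-case delay: {reduction:.1%}")
```

The same loop, run per cell and per input pattern, yields the standard-deviation and normalized-deviation comparisons the abstract reports.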
Disulfide oil hazard assessment using categorical analysis and a mode of action determination.
Morgott, David; Lewis, Christopher; Bootman, James; Banton, Marcy
2014-01-01
Diethyl and diphenyl disulfides, naphtha sweetening (Chemical Abstracts Service [CAS] # 68955-96-4), are primarily composed of low-molecular-weight dialkyl disulfides extracted from C4 to C5 light hydrocarbon streams during the refining of crude oil. The substance, commonly known as disulfide oil (DSO), can be composed of up to 17 different disulfides and trisulfides with monoalkyl chain lengths no greater than C4. The disulfides in DSO constitute a homologous series of chemical constituents that are perfectly suited for a hazard evaluation using a read-across/worst-case approach. The DSO constituents exhibit a common mode of action that is operable at all trophic levels. The observed oxidative stress response is mediated by reactive oxygen species and free radical intermediates generated after disulfide bond cleavage and subsequent redox cycling of the resulting mercaptan. Evidence indicates that the lowest series member, dimethyl disulfide (DMDS), can operate as a worst-case surrogate for other members of the series, since it displays the highest toxicity. Increasing the alkyl chain length or degree of substitution has been shown to serially reduce disulfide toxicity through resonance stabilization of the radical intermediate or steric inhibition of the initial enzymatic step. The following case study examines the mode of action for dialkyl disulfide toxicity and documents the use of read-across information from DMDS to assess the hazards of DSO. The results indicate that DSO possesses high aquatic toxicity, moderate environmental persistence, low to moderate acute toxicity, high repeated dose toxicity, and a low potential for genotoxicity, carcinogenicity, and reproductive/developmental effects.
Design data package and operating procedures for MSFC solar simulator test facility
NASA Technical Reports Server (NTRS)
1981-01-01
Design and operational data for the solar simulator test facility are reviewed. The primary goal of the facility is to evaluate the performance capability and worst-case failure modes of collectors, which utilize either air or liquid transport media. The facility simulates environmental parameters such as solar radiation intensity, solar spectrum, collimation, uniformity, and solar attitude. The facility also simulates wind conditions of velocity and direction, solar system conditions imposed on the collector, collector fluid inlet temperature, and geometric factors of collector tilt and azimuth angles. Testing in the simulator provides collector efficiency data, collector time constant, incident angle modifier data, and stagnation temperature values.
Integrated Optoelectronic Networks for Application-Driven Multicore Computing
2017-05-08
hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. • The key challenges for waveguide photonics… improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and… photonic crossbar architectures (Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7
Robust Controller Design: A Bounded-Input-Bounded-Output Worst-Case Approach
1992-03-01
show that 2 implies 1, suppose 1 does not hold, i.e., that p(M) > 1. The Perron-Frobenius theory for nonnegative matrices states that p(M) is itself an… Pz denote the positive cones inside X, Z consisting of elements with nonnegative pointwise components. Define the operator A : X → Z, decomposed… topology.) The dual cone P* again consists of the nonnegative elements in Z*. The Lagrangian can be defined as L(x, z*) = {<x, c*> + <Ax - b, z*>…
A novel plant protection strategy for transient reactors
NASA Astrophysics Data System (ADS)
Bhattacharyya, Samit K.; Lipinski, Walter C.; Hanan, Nelson A.
A plant protection system (PPS) has been defined for use in the TREAT-Upgrade (TU) reactor for controlled transient operation during reactor-fuel behavior testing under simulated reactor-accident conditions. A PPS with energy-dependent trip set points lowered worst-case clad temperatures by as much as 180 K relative to conventional fixed-level trip set points. The multilayered, multilevel protection strategy represents the state of the art in terrestrial transient-reactor protection systems and should be applicable to multi-MW space reactors.
50th Annual Fuze Conference Session 5
2006-05-11
level • Underwater Shock (NDIA Fuze Conf 2006) Warhead Lethality: MOFN has two potential warheads. EX 183 HE-MOFN: MK 64 projectile body, PBXN-106 explosive fill. EX 184 HE-MOFN: HIFRAG projectile body, PBXN-106 explosive fill. Warhead lethality effect is fragmentation. … Min Engagement Hazard • Worst Case Operational Configuration: Projectile = EX 184 HE-MOFN, MK 64 projectile with PBXN-106 fill
2018-02-28
On February 28, SpaceX completed a demonstration of their ability to recover the crew and capsule after a nominal water splashdown. This marks an important recovery milestone and joint test. The timeline requirement from splashdown to crew egress onboard the ship is one hour, and the recovery team demonstrated that they can accomplish this operation under worst-case conditions in under 45 minutes. Further improvements are planned to shorten the recovery time even more as the team works to build a process that is safe, repeatable, and efficient.
Operating Policies for Non-stationary Two-Echelon Inventory Systems for Reparable Items.
1986-05-01
resupply policy. Even under an HCP, we might want to change the resupply policy at management intervention times to reflect what we predict will happen… management is concerned with the worst performance predicted during the horizon. Regardless of the average performance over the horizon, management may not… locations in DCi(tm-1, tm) and INi(tm-1, tm). Case 3a: ASi(tm-1) > ASi(tm); INi(tm-1, tm) empty. Disposals must be made to lower the asset positions
Prioritization for Plastic Surgery Procedures Aimed to Improve Quality of Life: Moral Considerations
Kolby, Lars; Elander, Anna
2017-01-01
Background: Different health conditions are treated in a Plastic Surgery unit, including cases whose main goal is to enable patients to feel better and integrate within society, thereby improving quality of life rather than physical function. Methods: We discuss moral principles that can be used as a guide for health professionals to revise and create policies for plastic surgery patients presenting with non–life-threatening conditions. Results: A specific anatomical feature is not always an indicator of a patient’s well-being and quality of life, and therefore it cannot be used as the sole parameter to identify the worst-off and prioritize the provision of health care. A policy should identify who, preoperatively, are the worst-off and come to some plausible measure of how much they can be expected to benefit from an operation. Policies that do not track these principles in any reliable way can cause discrimination. Conclusions: A patient-centered operating system and patients’ informed preferences might be implemented in the process of prioritizing health care. In circumstances when the effectiveness of a specific treatment is unproven, professionals should not make assumptions based on their own values. PMID:28894658
1991-01-01
experience in developing integrated optical devices, nonlinear magneto-optic materials, high-frequency modulators, computer-aided modeling and sophisticated… high-level presentation and distributed control models for integrating heterogeneous mechanical engineering applications and tools. The design is focused… statistically accurate worst-case device models for circuit simulation. Present methods of worst-case device design are ad hoc and do not allow the
Thermal Performance of LANDSAT-7 ETM+ Instruments During First Year in Flight
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2000-01-01
Landsat-7 was successfully launched into orbit on April 15, 1999. After devoting three months to the bakeout and cool-down of the radiative cooler and to on-orbit checkout, the Enhanced Thematic Mapper Plus (ETM+) began the normal imaging phase of the mission in mid-July 1999. This paper presents the thermal performance of the ETM+ from mid-July 1999 to mid-May 2000. The flight temperatures are compared to the yellow temperature limits and to the worst-cold-case and worst-hot-case flight temperature predictions for the 15-orbit mission design profile. The flight temperature predictions were generated by a thermal model that was correlated to the observatory thermal balance test data. The yellow temperature limits were derived from the flight temperature predictions, plus some margins. The yellow limits work well in flight, and only a few minor changes to them were needed. Overall, the flight temperatures and the flight temperature predictions agree well. Based on the ETM+ thermal vacuum qualification test, new limits on the imaging time are proposed to increase the average duty cycle and to resolve the problems experienced by the Mission Operation Team.
Quantum Discord Determines the Interferometric Power of Quantum States
NASA Astrophysics Data System (ADS)
Girolami, Davide; Souza, Alexandre M.; Giovannetti, Vittorio; Tufarelli, Tommaso; Filgueiras, Jefferson G.; Sarthour, Roberto S.; Soares-Pinto, Diogo O.; Oliveira, Ivan S.; Adesso, Gerardo
2014-05-01
Quantum metrology exploits quantum mechanical laws to improve the precision in estimating technologically relevant parameters such as phase, frequency, or magnetic fields. Probe states are usually tailored to the particular dynamics whose parameters are being estimated. Here we consider a novel framework where quantum estimation is performed in an interferometric configuration, using bipartite probe states prepared when only the spectrum of the generating Hamiltonian is known. We introduce a figure of merit for the scheme, given by the worst-case precision over all suitable Hamiltonians, and prove that it amounts exactly to a computable measure of discord-type quantum correlations for the input probe. We complement our theoretical results with a metrology experiment, realized in a highly controllable room-temperature nuclear magnetic resonance setup, which provides a proof-of-concept demonstration for the usefulness of discord in sensing applications. Discordant probes are shown to guarantee a nonzero phase sensitivity for all the chosen generating Hamiltonians, while classically correlated probes are unable to accomplish the estimation in a worst-case setting. This work establishes a rigorous and direct operational interpretation for general quantum correlations, shedding light on their potential for quantum technology.
Local measles vaccination gaps in Germany and the role of vaccination providers.
Eichner, Linda; Wjst, Stephanie; Brockmann, Stefan O; Wolfers, Kerstin; Eichner, Martin
2017-08-14
Measles elimination in Europe is an urgent public health goal, yet despite the efforts of its member states, vaccination gaps and outbreaks occur. This study explores local vaccination heterogeneity in kindergartens and municipalities of a German county. Data on children from mandatory school enrolment examinations in 2014/15 in Reutlingen county were used. Children with unknown vaccination status were either removed from the analysis (best case) or assumed to be unvaccinated (worst case). Vaccination data were translated into expected outbreak probabilities. Physicians and kindergartens with statistically outstanding numbers of under-vaccinated children were identified. A total of 170 (7.1%) of 2388 children did not provide a vaccination certificate; 88.3% (worst case) or 95.1% (best case) were vaccinated at least once against measles. Based on the worst case vaccination coverage, <10% of municipalities and <20% of kindergartens were sufficiently vaccinated to be protected against outbreaks. Excluding children without a vaccination certificate (best case) leads to over-optimistic views: the overall outbreak probability in case of a measles introduction lies between 39.5% (best case) and 73.0% (worst case). Four paediatricians were identified who accounted for 41 of 109 unvaccinated children and for 47 of 138 incomplete vaccinations; GPs showed significantly higher rates of missing vaccination certificates and unvaccinated or under-vaccinated children than paediatricians. Missing vaccination certificates pose a severe problem regarding the interpretability of vaccination data. Although the coverage for at least one measles vaccination is higher in the studied county than in most South German counties and higher than the European average, many severe and potentially dangerous vaccination gaps occur locally. If other federal German states and EU countries show similar vaccination variability, measles elimination may not succeed in Europe.
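The best-/worst-case coverage bookkeeping described above reduces to simple arithmetic. A short sketch (the count of documented vaccinated children is back-calculated from the abstract's percentages, so treat it as an assumption):

```python
total_children = 2388
no_certificate = 170          # children with unknown vaccination status
vaccinated_documented = 2109  # back-calculated from 88.3% of 2388 (assumed)

# Best case: children without certificates are excluded from the denominator
best = vaccinated_documented / (total_children - no_certificate)
# Worst case: children without certificates count as unvaccinated
worst = vaccinated_documented / total_children

print(f"best case {best:.1%}, worst case {worst:.1%}")
```

The spread between the two bounds (95.1% vs. 88.3%) is exactly why the abstract calls missing certificates a severe problem for interpreting coverage data.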
Robust blood-glucose control using Mathematica.
Kovács, Levente; Paláncz, Béla; Benyó, Balázs; Török, László; Benyó, Zoltán
2006-01-01
A robust control design in the frequency domain using Mathematica is presented for regulation of the glucose level in type I diabetes patients under intensive care. The method originally proposed under Mathematica by Helton and Merino, now with an improved disturbance-rejection constraint inequality, is employed, using a three-state minimal patient model. The robustness of the resulting high-order linear controller is demonstrated by nonlinear closed-loop simulation in state space under standard meal disturbances, and is compared with an H-infinity design implemented with the mu-toolbox of Matlab. The controller, designed with the model parameters representing the most favorable plant dynamics for control purposes, operates properly even with the parameter values of the worst-case scenario.
Worst case estimation of homology design by convex analysis
NASA Technical Reports Server (NTRS)
Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.
1998-01-01
The methodology of homology design is investigated for the optimum design of advanced structures, for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. The formulation to evaluate the worst case for homology design caused by uncertain fluctuation of loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
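The worst-case step above, maximizing an error index over a hyperellipse via a Lagrange multiplier, has a closed form when the index is linearized. A minimal numerical sketch, with a made-up sensitivity vector g, ellipse matrix W, and size rho:

```python
import numpy as np

# Worst case of a linearized error index e(x) = g @ x over the hyperellipse
# x @ W @ x <= rho**2. g, W and rho are illustrative stand-ins for the
# sensitivity vector, the ellipse shape matrix and the uncertainty size.
g = np.array([1.0, -2.0, 0.5])   # sensitivity of the homology error index
W = np.diag([4.0, 1.0, 2.0])     # shape of the uncertainty hyperellipse
rho = 0.1                        # size of the uncertainty set

# Stationarity g = 2*lambda*W@x plus the active constraint on the boundary
# gives the closed-form maximizer:
Winv_g = np.linalg.solve(W, g)
x_worst = rho / np.sqrt(g @ Winv_g) * Winv_g
e_worst = g @ x_worst            # equals rho * sqrt(g @ W^-1 @ g)

print(x_worst, e_worst)
```

Note that the maximizer lies exactly on the boundary of the hull, as the abstract's Lagrange-multiplier search assumes.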
Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios
NASA Astrophysics Data System (ADS)
Kluess, D.; Mittelmeier, W.; Bader, R.
2009-12-01
In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst-case scenarios including malposition, bone defects and stumbling. A finite element model was developed to calculate the implant performance in situ. The worst-case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stresses were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.
Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios
NASA Astrophysics Data System (ADS)
Kluess, D.; Mittelmeier, W.; Bader, R.
2010-03-01
In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst-case scenarios including malposition, bone defects and stumbling. A finite element model was developed to calculate the implant performance in situ. The worst-case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stresses were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.
Dima, Giovanna; Verzera, Antonella; Grob, Koni
2011-11-01
Party plates made of recycled paperboard with a polyolefin film on the food contact surface (more often polypropylene than polyethylene) were tested for migration of mineral oil into various foods under reasonable worst-case conditions. The worst case was identified as a slice of fried meat placed onto the plate while hot and allowed to cool for 1 h. As it caused the acceptable daily intake (ADI) specified by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) to be exceeded, it is concluded that recycled paperboard is generally acceptable for party plates only when separated from the food by a functional barrier. Migration data obtained with oil as a simulant at 70°C were compared with the migration into foods. A contact time of 30 min was found to reasonably cover the worst case determined in food.
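The ADI comparison described above is a short exposure calculation. A sketch with illustrative placeholder numbers (not the study's measured migration values or the JECFA ADI):

```python
# All values below are hypothetical placeholders for illustration only.
body_weight_kg = 60.0
adi_mg_per_kg_bw = 0.01          # assumed ADI for the mineral-oil fraction
migration_mg_per_kg_food = 5.0   # assumed migration into the hot meat
portion_kg = 0.2                 # one slice of fried meat

# Daily intake per kg body weight from the contaminated portion
intake_mg_per_kg_bw = migration_mg_per_kg_food * portion_kg / body_weight_kg
exceeds_adi = intake_mg_per_kg_bw > adi_mg_per_kg_bw
print(intake_mg_per_kg_bw, exceeds_adi)
```

When the computed intake exceeds the ADI, as in the hot-meat scenario, the plate material fails the worst-case test.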
Davis, Michael J; Janke, Robert
2018-01-04
The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
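The worst-case versus mean-case sensor-placement objective can be sketched on a toy impact table. The network, events, and exposure numbers below are invented; real designs come from hydraulic simulation of the WDS model:

```python
from itertools import combinations

impact = {  # event -> {candidate sensor node: people exposed before detection}
    "e1": {"A": 5, "B": 100, "C": 50},
    "e2": {"A": 5, "B": 100, "C": 50},
    "e3": {"A": 100, "B": 40, "C": 50},
}
nodes, k = ["A", "B", "C"], 1  # place k sensors among the candidate nodes

def consequence(design, aggregate):
    # the sensor that detects first caps the consequence of each event
    per_event = [min(row[n] for n in design) for row in impact.values()]
    return aggregate(per_event)

def best_design(aggregate):
    return min(combinations(nodes, k), key=lambda d: consequence(d, aggregate))

worst_case_design = best_design(max)                          # minimax
mean_case_design = best_design(lambda xs: sum(xs) / len(xs))  # min-mean
print(worst_case_design, mean_case_design)
```

Note how the two objectives pick different designs here: the worst-case design caps the largest single event, while the mean-case design minimizes the average consequence, mirroring the trade-off the study evaluates under network-model uncertainty.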
NASA Astrophysics Data System (ADS)
Davis, Michael J.; Janke, Robert
2018-05-01
The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
Sensitivity of worst-case storm surge considering the influence of climate change
NASA Astrophysics Data System (ADS)
Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya
2016-04-01
There are two standpoints when assessing risk caused by climate change. One is disaster prevention, for which we need probabilistic information on meteorological elements obtained from a sufficiently large number of ensemble simulations. The other is disaster mitigation, for which we must use a very high-resolution, sophisticated model to represent a worst-case event in detail. With enough computing resources to drive many ensemble runs with a very high-resolution model, both themes could be handled at once; in most cases, however, resources are limited, and we must trade off resolution against the number of simulations when designing the experiment. Applying the PGWD (Pseudo Global Warming Downscaling) method is one solution for analyzing a worst-case event in detail. Here we introduce an example of finding the influence of climate change on a worst-case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1 km grid WRF model could represent both the intensity and the structure of the super typhoon. The PGWD method lets us estimate only the influence of climate change on the development of the typhoon; changes in typhoon genesis cannot be estimated. Finally, we ran the SU-WAT model (which includes a shallow-water equation model) to obtain the storm surge height. The result indicates that the height of the storm surge increased by up to 20% owing to 150 years of climate change.
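The core of the PGWD step is a simple "delta" perturbation of the boundary conditions: re-run the historical event with future-minus-present climatological differences added. The numbers below are illustrative placeholders, not the Haiyan experiment's fields:

```python
# Sea-surface temperatures along the storm track (degC) from reanalysis,
# and GCM-projected climatological deltas (future minus present).
# Both series are invented for illustration.
reanalysis_sst = [28.5, 28.9, 29.2]
gcm_delta_sst = [1.1, 1.2, 1.0]

# Pseudo-global-warming boundary condition: historical state + climate delta
pgw_sst = [t + d for t, d in zip(reanalysis_sst, gcm_delta_sst)]
print(pgw_sst)
```

In practice the same perturbation is applied field by field (temperature, humidity, SST) to the lateral and lower boundary conditions of the downscaling model, so the historical event replays in a warmer background climate.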
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 2, Appendix A
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.
Probabilistic Models for Solar Particle Events
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.
2009-01-01
Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst-case spectrum as a function of confidence level. The spectral representation that best fits these worst-case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.
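The confidence-level step above can be sketched empirically: given one fitted fluence value per observed SPE at a fixed energy, the "worst case at confidence level p" can be read off as the p-th percentile of that set. The fluence values below are made-up placeholders, not the survey data:

```python
# One fitted fluence value per observed SPE at a fixed energy
# (arbitrary units; invented for illustration).
fluences = [3.0, 1.2, 8.5, 0.7, 4.4, 2.1, 6.3, 1.9, 5.0, 2.8]

def worst_case(fluence_per_event, confidence):
    """Smallest fluence not exceeded by `confidence` of the observed events."""
    ordered = sorted(fluence_per_event)
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[idx]

print(worst_case(fluences, 0.90), worst_case(fluences, 0.50))
```

The actual models fit parametric spectral forms and parameterize the confidence-level dependence rather than using a raw empirical percentile, but the interpretation of the confidence level is the same.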
A Multidimensional Assessment of Children in Conflictual Contexts: The Case of Kenya
ERIC Educational Resources Information Center
Okech, Jane E. Atieno
2012-01-01
Children in Kenya's Kisumu District Primary Schools (N = 430) completed three measures of trauma. Respondents completed the "My Worst Experience Scale" (MWES; Hyman and Snook 2002) and its supplement, the "School Alienation and Trauma Survey" (SATS; Hyman and Snook 2002), sharing their worst experiences overall and specifically…
Hot-spot investigations of utility scale panel configurations
NASA Technical Reports Server (NTRS)
Arnett, J. C.; Dally, R. B.; Rumburg, J. P.
1984-01-01
The causes of array faults and efforts to mitigate their effects are examined. Research is concentrated on the panel for the 900 kW second phase of the Sacramento Municipal Utility District (SMUD) project. The panel is designed for hot-spot tolerance without compromising efficiency under normal operating conditions. Series/paralleling internal to each module improves tolerance in the power quadrant to cell short or open circuits. Analytical methods are developed for predicting worst-case shade patterns and calculating the resultant cell temperature. Experiments conducted on a prototype panel support the analytical calculations.
DSN command system Mark III-78. [data processing
NASA Technical Reports Server (NTRS)
Stinnett, W. G.
1978-01-01
The Deep Space Network command Mark III-78 data processing system includes a capability for a store-and-forward handling method. The functions of (1) storing the command files at a Deep Space station; (2) attaching the files to a queue; and (3) radiating the commands to the spacecraft are straightforward. However, the total data processing capability is a result of assuming worst case, failure-recovery, or nonnominal operating conditions. Optional data processing functions include: file erase, clearing the queue, suspend radiation, command abort, resume command radiation, and close window time override.
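The three store-and-forward functions listed above map naturally onto a queue. A minimal sketch (the file and command names are invented, not DSN identifiers):

```python
from collections import deque

# Step 1: command files stored at the station (names are illustrative)
stored_files = {"CF-1": ["CMD-A", "CMD-B"], "CF-2": ["CMD-C"]}
queue = deque()

def attach(file_id):
    queue.append(file_id)      # step 2: attach the file to the queue

def radiate_next():
    file_id = queue.popleft()  # step 3: radiate the file's commands in order
    return [(file_id, cmd) for cmd in stored_files[file_id]]

attach("CF-1")
attach("CF-2")
sent = radiate_next() + radiate_next()
print(sent)
```

The optional functions the abstract lists (file erase, clearing the queue, suspend/resume, abort) would be additional operations on the same store and queue, which is where the worst-case and failure-recovery handling enters.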
NASA Technical Reports Server (NTRS)
Montt de Garcia, Kristina; Patel, Jignasha; Perry, Radford, III
2010-01-01
Extremely tight thermal control property degradation allowances on the vapor-deposited, gold-coated IEC baffle surface, made necessary by the cryogenic JWST Observatory operations, dictate tight contamination requirements on adjacent surfaces. Theoretical degradation in emittance with contaminant thickness was calculated. Maximum allowable source outgassing rates were calculated using worst case view factors from source to baffle surface. Tight requirements pushed the team to change the design of the adjacent surfaces to minimize the outgassing sources.
NASA Astrophysics Data System (ADS)
Montt de Garcia, Kristina; Patel, Jignasha; Perry, Radford, III
2010-08-01
Extremely tight thermal control property degradation allowances on the vapor-deposited, gold-coated IEC baffle surface, made necessary by the cryogenic JWST Observatory operations, dictate tight contamination requirements on adjacent surfaces. Theoretical degradation in emittance with contaminant thickness was calculated. Maximum allowable source outgassing rates were calculated using worst case view factors from source to baffle surface. Tight requirements pushed the team to change the design of the adjacent surfaces to minimize the outgassing sources.
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes exhibit very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel
2014-09-09
A novel biomimetic extraction procedure that allows for the in-line handling of ≥400 mg solid substrates is herein proposed for automatic ascertainment of trace element (TE) bioaccessibility in soils under worst-case conditions as per recommendations of ISO norms. A unified bioaccessibility/BARGE method (UBM)-like physiologically based extraction test is evaluated for the first time in a dynamic format for accurate assessment of in-vitro bioaccessibility of Cr, Cu, Ni, Pb and Zn in forest and residential-garden soils by on-line coupling of a hybrid flow set-up to inductively coupled plasma atomic emission spectrometry. Three biologically relevant operational extraction modes were thoroughly investigated, mimicking: (i) gastric juice extraction alone; (ii) a saliva and gastric juice composite in a unidirectional flow extraction format and (iii) a saliva and gastric juice composite in a recirculation mode. The extraction profiles of the three configurations using digestive fluids were proven to fit a first-order reaction kinetic model for estimating the maximum TE bioaccessibility, that is, the actual worst-case scenario in human risk assessment protocols. A full factorial design was applied, in which the sample amount (400-800 mg), the extractant flow rate (0.5-1.5 mL min(-1)) and the extraction temperature (27-37°C) were selected as variables for the multivariate optimization studies in order to obtain the maximum TE extractability. Two soils of varied physicochemical properties were analysed and no significant differences were found at the 0.05 significance level between the summation of leached concentrations of TE in gastric juice plus the residual fraction and the total concentration of the overall assayed metals determined by microwave digestion. These results showed the reliability and lack of bias (trueness) of the automatic biomimetic extraction approach using digestive juices. Copyright © 2014 Elsevier B.V. All rights reserved.
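As a minimal illustration of the first-order kinetic fit described above, the sketch below recovers the plateau (maximum bioaccessible) concentration Cmax and rate constant k from an extraction profile C(t) = Cmax(1 - exp(-kt)). The grid-search routine is an assumption for illustration, not the authors' software.

```python
import math

def fit_first_order(times, conc, k_grid=None):
    """Fit C(t) = Cmax * (1 - exp(-k t)) by scanning k and solving
    Cmax in closed form (linear least squares for fixed k).
    Returns (k, Cmax); Cmax estimates the worst-case bioaccessible amount."""
    if k_grid is None:
        k_grid = [i / 1000 for i in range(1, 2001)]  # 0.001 .. 2.0 (1/time)
    best = None
    for k in k_grid:
        f = [1 - math.exp(-k * t) for t in times]
        denom = sum(fi * fi for fi in f)
        if denom == 0:
            continue
        # For fixed k, the least-squares Cmax is a simple ratio.
        cmax = sum(fi * ci for fi, ci in zip(f, conc)) / denom
        sse = sum((ci - cmax * fi) ** 2 for ci, fi in zip(conc, f))
        if best is None or sse < best[0]:
            best = (sse, k, cmax)
    return best[1], best[2]
```

In practice one would use a nonlinear least-squares routine; the grid scan just keeps the sketch dependency-free.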
Mühlbacher, Axel C; Kaczynski, Anika; Zweifel, Peter; Johnson, F Reed
2016-12-01
Best-worst scaling (BWS), also known as maximum-difference scaling, is a multiattribute approach to measuring preferences. BWS aims at the analysis of preferences regarding a set of attributes, their levels or alternatives. It is a stated-preference method based on the assumption that respondents are capable of making judgments regarding the best and the worst (or the most and least important, respectively) out of three or more elements of a choice set. As is true of discrete choice experiments (DCE) generally, BWS avoids the known weaknesses of rating and ranking scales while holding the promise of generating additional information by making respondents choose twice, namely the best as well as the worst criteria. A systematic literature review found 53 BWS applications in health and healthcare. This article expounds the possibilities of application, the underlying theoretical concepts and the implementation of BWS in its three variants: 'object case', 'profile case' and 'multiprofile case'. The paper surveys BWS methods with respect to study design, experimental design, and data analysis. Moreover, the article discusses the strengths and weaknesses of the three types of BWS and offers an outlook. A companion paper focuses on special issues of theory and statistical inference confronting BWS in preference measurement.
Tutorial on Actual Space Environmental Hazards For Space Systems (Invited)
NASA Astrophysics Data System (ADS)
Mazur, J. E.; Fennell, J. F.; Guild, T. B.; O'Brien, T. P.
2013-12-01
It has become common in the space science community to conduct research on diverse physical phenomena because they are thought to contribute to space weather. However, satellites contend with only three primary environmental hazards: single event effects, vehicle charging, and total dose, and not every physical phenomenon that occurs in space contributes in substantial ways to create these hazards. One consequence of the mismatch between actual threats and all-encompassing research is the often-described gap between research and operations; another is the creation of forecasts that provide no actionable information for design engineers or spacecraft operators. An example of the latter is the physics of magnetic field emergence on the Sun; the phenomenon is relevant to the formation and launch of coronal mass ejections and is also causally related to the solar energetic particles that may get accelerated in the interplanetary shock. Unfortunately for the research community, the engineering community mitigates the space weather threat (single-event effects from heavy ions above ~50 MeV/nucleon) with a worst-case specification of the environment and not with a prediction. Worst-case definition requires data mining of past events, while predictions involve large-scale systems science from the Sun to the Earth that is compelling for scientists and their funding agencies but not actionable for design or for most operations. Differing priorities among different space-faring organizations only compound the confusion over what science research is relevant. Solar particle impacts on human crews arise mainly from the total ionizing dose from solar protons, so the priority for prediction in the human spaceflight community is much different from that in the unmanned satellite community, although both communities refer to the fundamental phenomenon as space weather.
Our goal in this paper is the presentation of a brief tutorial on the primary space environmental phenomena that are relevant to satellite design and operations. The tutorial will help space science researchers to understand the differing priorities of communities that operate in space and to better distinguish the science that is actually needed for the design and operation of all-weather space systems.
NASA Astrophysics Data System (ADS)
Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.
2017-12-01
In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is, in part, due to a lack of faith in the fidelity of the optimization exercise with regard to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance of uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or the aggregation of regional storages. We create `rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternate objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto-approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance, in an effort to generalize the validity of the optimized performance expectations.
Effect of Impact Location on the Response of Shuttle Wing Leading Edge Panel 9
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Spellman, Regina L.; Hardy, Robin C.; Fasanella, Edwin L.; Jackson, Karen E.
2005-01-01
The objective of this paper is to compare the results of several simulations performed to determine the worst-case location for a foam impact on the Space Shuttle wing leading edge. The simulations were performed using the commercial non-linear transient dynamic finite element code, LS-DYNA. These simulations represent the first in a series of parametric studies performed to support the selection of the worst-case impact scenario. Panel 9 was selected for this study to enable comparisons with previous simulations performed during the Columbia Accident Investigation. The projectile for this study is a 5.5-in cube of typical external tank foam weighing 0.23 lb. Seven locations spanning the panel surface were impacted with the foam cube. For each of these cases, the foam was traveling at 1000 ft/s directly aft, along the orbiter X-axis. Results compared from the parametric studies included strains, contact forces, and material energies for various simulations. The results show that the worst case impact location was on the top surface, near the apex.
Faith, Daniel P.
2015-01-01
The phylogenetic diversity measure, (‘PD’), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
NASA Technical Reports Server (NTRS)
Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.
1999-01-01
The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.
NASA Technical Reports Server (NTRS)
Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, Edward A.; Gee, G. B.
1999-01-01
The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.
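As a toy stand-in for the worst-case-event idea in the ESP abstract above (the ESP model itself fits a statistical distribution; this is not it), one can read a worst-case event fluence off the empirical distribution of historical event fluences at a chosen confidence level:

```python
def worst_case_fluence(event_fluences, confidence):
    """Return the event fluence bounding the worst case at the given
    confidence level, using the empirical distribution of past events.
    A deliberately naive quantile stand-in for a fitted model."""
    ordered = sorted(event_fluences)
    # Index of the smallest observation bounding `confidence` of events.
    idx = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[idx]
```

A real model additionally scales with mission duration and extrapolates beyond the observed record, which a plain empirical quantile cannot do.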
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
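A minimal scalar analogue conveys the min-max idea above (this is not the paper's guaranteed-cost filter, only an illustration): pick a fixed filter gain that minimizes the largest steady-state error variance over an assumed range of the uncertain process-noise power q, for the scalar system x' = a x + w (Var w = q), y = x + v (Var v = r).

```python
def steady_state_var(K, a, q, r):
    """Steady-state estimation error variance of a fixed-gain scalar filter.
    Solves P = (1-K)^2 (a^2 P + q) + K^2 r for P."""
    shrink = (1 - K) ** 2 * a ** 2
    assert shrink < 1, "fixed gain K does not stabilize the error dynamics"
    return ((1 - K) ** 2 * q + K ** 2 * r) / (1 - shrink)

def minmax_gain(a, q_range, r, gains):
    """Pick the gain minimizing the worst-case variance over q_range."""
    def worst(K):
        return max(steady_state_var(K, a, q, r) for q in q_range)
    K_best = min(gains, key=worst)
    return K_best, worst(K_best)
```

By construction the min-max gain is never worse, in the worst case, than a gain tuned to any single nominal q, which mirrors the guaranteed-cost property described in the abstract.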
Alfa, M J; Olson, N
2016-05-01
The aim was to determine which simulated-use test soils met the worst-case organic levels and viscosity of clinical secretions, and had the best adhesive characteristics. Levels of protein, carbohydrate and haemoglobin, and vibrational viscosity of clinical endoscope secretions were compared with test soils including ATS, ATS2015, Edinburgh, Edinburgh-M (modified), Miles, 10% serum and coagulated whole blood. ASTM D3359 was used for adhesion testing. Cleaning of a single-channel flexible intubation endoscope was tested after simulated use. The worst-case levels of protein, carbohydrate and haemoglobin, and viscosity of clinical material were 219,828 μg/mL, 9296 μg/mL, 9562 μg/mL and 6 cP, respectively. Whole blood, ATS2015 and Edinburgh-M were pipettable, with viscosities of 3.4 cP, 9.0 cP and 11.9 cP, respectively. ATS2015 and Edinburgh-M best matched the worst-case clinical parameters, but ATS had the best adhesion, with 7% removal (36.7% for Edinburgh-M). Edinburgh-M and ATS2015 showed similar soiling and removal characteristics on the surface and lumen of a flexible intubation endoscope. Of the test soils evaluated, ATS2015 and Edinburgh-M were found to be good choices for the simulated use of endoscopes, as their composition and viscosity most closely matched worst-case clinical material. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.1 to 3.1 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour at the left abutment ranged from 10.4 to 12.5 ft with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 25.3 to 27.3 ft with the worst-case occurring at the incipient-overtopping discharge. The worst-case total scour also occurred at the incipient-overtopping discharge. The incipient-overtopping discharge was in between the 100- and 500-year discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
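The abutment-scour figures reported above come from the cited Froehlich relation, commonly tabulated as ys/ya = 2.27 K1 K2 (L/ya)^0.43 Fr^0.61 + 1. The sketch below encodes that form as I recall it from HEC-18; the coefficients and exponents should be verified against the cited source before any engineering use.

```python
def froehlich_abutment_scour(ya, L, Fr, K1=1.0, K2=1.0):
    """Froehlich live-bed abutment scour depth ys (same length unit as ya).
    ya: average approach flow depth
    L:  length of embankment obstructing active flow
    Fr: Froude number of the approach flow
    K1, K2: abutment-shape and embankment-skew coefficients.
    Form as commonly tabulated; verify before use (noted in the text as a
    conservative equation)."""
    return ya * (2.27 * K1 * K2 * (L / ya) ** 0.43 * Fr ** 0.61 + 1.0)
```

The "+1" term means the predicted scour always exceeds one flow depth, part of why the equation is described as excessively conservative.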
Hollis, Geoff
2018-04-01
Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
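The simplest classical scoring rule for best-worst data, best-minus-worst counting, can be sketched as follows. This is only the baseline the paper improves upon; the many-item algorithms it introduces are more elaborate.

```python
from collections import defaultdict

def best_minus_worst(trials):
    """Score items from best-worst trials.
    Each trial is (items_shown, best_item, worst_item).
    Returns item -> (#best - #worst) / #appearances, a score in [-1, 1]."""
    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for shown, b, w in trials:
        for item in shown:
            seen[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}
```

Because each trial also ranks every unjudged item implicitly (better than the worst, worse than the best), more sophisticated scorers can extract more information per judgment than this count does.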
NASA Technical Reports Server (NTRS)
Simon, M. K.; Polydoros, A.
1981-01-01
This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.
1986-03-31
requirements necessary to optimize BAS/DCS operation in worst case environments. 4) Identify the qualitative and quantitative values of equipment which... [Extraction fragment: a table of medical equipment (defibrillator; surgical sink unit; resuscitator-inhaler; surgical sterilizer) with power and usage figures, not recoverable here.] ...transferred, the driving force for transfer is the difference in dry bulb temperatures. During heat transfer between unsaturated air and a wetted...
NASA Technical Reports Server (NTRS)
Stokes, R. L.
1979-01-01
Electrical characterization tests were performed on two different manufactured types of integrated circuits. The devices were subjected to functional and AC and DC parametric tests at ambient temperatures of -55 C, -20 C, 25 C, 85 C, and 125 C. The data were analyzed and tabulated to show the effect of operating conditions on performance and to indicate parameter deviations among devices in each group. Accuracy was given precedence over test time efficiency where practical, and tests were designed to measure worst case performance.
[Malignant tumors of the female genital tract in the elderly].
Gottwald, Leszek; Akoel, Kindah Mo; Wójcik-Krowiranda, Katarzyna; Bieńkiewicz, Andrzej
2003-09-01
In old age there is an increase in the incidence of most malignant neoplasms, including gynecological cancers. In this period of life the vast majority of women do not attend preventive and follow-up examinations, which increases the number of malignant diseases diagnosed at advanced clinical stages. Coexisting diseases often limit the possibility of operative treatment in these cases. The aim was to assess the profile of malignant tumors of the genital tract and their treatment in women above 70 years of age. 61 women aged from 71 to 88 years, treated operatively between 1997-2001 for gynecological cancers, were included in the study. The structure and detectability of the neoplasms, as well as the types of surgical procedures performed, were analysed. 30 endometrial cancers (49.2%), 16 ovarian cancers (26.2%), 14 vulvar cancers (22.9%) and 1 cervical cancer were diagnosed and surgically treated. Endometrial cancer was detected at stage I in 18 cases, stage II in 4 cases and stage III in 8 cases; in each case a radical operation was performed (total hysterectomy, lymphadenectomy and appendectomy). Ovarian cancer was detected at stage I in 3 cases, stage II in 2 cases, stage III in 5 cases, and stage IV in 6 cases; only in 5 of these cases was radical surgery performed (total hysterectomy, omentectomy and appendectomy). Vulvar cancer was detected at stage I in 2 cases, stage II in 11 cases, and FIGO stage III in 4 cases; in each of these women the vulva and bilateral inguinal lymph nodes were resected, and in 2 cases the Miles operation was additionally performed at the same time. The single cervical cancer was detected at clinical stage I, and the Wertheim operation was performed. The most frequently diagnosed malignant neoplasm in women above 70 years of age was endometrial cancer. The worst stage distribution at first diagnosis was observed for ovarian cancer, which significantly reduced the feasibility of surgical treatment in this group.
Optimal Analyses for 3×n AB Games in the Worst Case
NASA Astrophysics Data System (ADS)
Huang, Li-Te; Lin, Shun-Shii
The past decades have witnessed a growing interest in research on deductive games such as Mastermind and the AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. Traditional tree-search approaches are not appropriate for this problem, however, since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. Therefore, an intensive analysis of optimal worst-case play of 3×n AB games is conducted, and a sophisticated method called structural reduction, which aims at characterizing the worst situation in this game, is developed in the study. Furthermore, a formula for calculating the optimal number of guesses required for arbitrary values of n is derived and proven correct.
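A brute-force minimax solver conveys what "optimal in the worst case" means for such deductive games. The sketch below restricts guesses to the remaining candidates, a simplification the paper's analysis does not make, so for small n it yields an upper bound on the true optimum rather than the paper's exact values.

```python
from functools import lru_cache
from itertools import permutations

def feedback(guess, secret):
    """(A, B): A = right symbol in the right place,
    B = right symbol in the wrong place."""
    a = sum(g == s for g, s in zip(guess, secret))
    b = len(set(guess) & set(secret)) - a
    return a, b

def solve(n):
    """Worst-case number of guesses to identify the secret of a 3 x n
    AB game (secrets are 3 distinct symbols out of n), guessing only
    from the remaining candidate set."""
    @lru_cache(maxsize=None)
    def worst(cands):
        if len(cands) == 1:
            return 1              # one guess confirms the last candidate
        best = len(cands)         # guessing candidates one by one always works
        for g in cands:
            parts = {}            # feedback -> surviving candidates
            for s in cands:
                if s != g:
                    parts.setdefault(feedback(g, s), []).append(s)
            best = min(best, 1 + max(worst(tuple(sorted(p)))
                                     for p in parts.values()))
        return best

    return worst(tuple(sorted(permutations(range(n), 3))))
```

Memoizing on the candidate set keeps the search tractable for small n; the exponential growth of the candidate space is exactly why the paper replaces tree search with structural analysis for large n.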
Resilient Grid Operational Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasqualini, Donatella
Extreme weather-related disturbances, such as hurricanes, are historically a leading cause of grid outages. Although physical asset hardening is perhaps the most common way to mitigate the impacts of severe weather, operational strategies may be deployed to limit the extent of societal and economic losses associated with weather-related physical damage. The purpose of this study is to examine bulk power-system operational strategies that can be deployed to mitigate the impact of severe weather disruptions caused by hurricanes, thereby increasing grid resilience to maintain continuity of critical infrastructure during extreme weather. To estimate the impacts of resilient grid operational strategies, Los Alamos National Laboratory (LANL) developed a framework for hurricane probabilistic risk analysis (PRA). The probabilistic nature of this framework allows us to estimate the probability distribution of likely impacts, as opposed to the worst-case impacts. The project scope does not include strategies that are not operations related, such as transmission system hardening (e.g., undergrounding, transmission tower reinforcement and substation flood protection) and solutions in the distribution network.
How can health systems research reach the worst-off? A conceptual exploration.
Pratt, Bridget; Hyder, Adnan A
2016-11-15
Health systems research is increasingly being conducted in low- and middle-income countries (LMICs). Such research should aim to reduce health disparities between and within countries as a matter of global justice. For such research to do so, ethical guidance consistent with egalitarian theories of social justice proposes that it ought to (amongst other things) focus on the worst-off countries and research populations. Yet who constitutes the worst-off is not well defined. By applying existing work on disadvantage from political philosophy, the paper demonstrates that (at least) two options exist for how to define the worst-off upon whom equity-oriented health systems research should focus: those who are worst-off in terms of health or those who are systematically disadvantaged. The paper describes in detail how both concepts can be understood and what metrics can be relied upon to identify worst-off countries and research populations at the sub-national level (groups, communities). To demonstrate how each can be used, the paper considers two real-world cases of health systems research and whether their choice of country (Uganda, India) and research population in 2011 would have been classified as amongst the worst-off according to the proposed concepts. The two proposed concepts can classify different countries and sub-national populations as worst-off. It is recommended that health researchers (or other actors) use the concept that best reflects their moral commitments, namely to perform research focused on reducing health inequalities or systematic disadvantage more broadly. If addressing the latter, it is recommended that they rely on the multidimensional poverty approach rather than the income approach to identify worst-off populations.
Faith, Daniel P
2015-02-19
The phylogenetic diversity measure, ('PD'), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-04-01
Vibration energy harvesting has been shown to be a promising power source for many small-scale applications, mainly because of the considerable reduction in the energy consumption of electronics and the scalability issues of conventional batteries. However, energy harvesters may not be as robust as conventional batteries, and their performance can drastically deteriorate in the presence of uncertainty in their parameters. Hence, the study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While previous studies have focused on expectation optimization, we propose a new and more practical optimization perspective: optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and, as a simple example, apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power and then optimize for it under different confidence levels. The results show a significant improvement in the worst-case power of the thus-designed harvester compared to that of a naively (deterministically) optimized harvester.
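The worst-case (minimum-power) design idea can be sketched with a generic resonant-response curve standing in for the harvester model (the response shape below is an assumption for illustration, not the paper's piezoelectric model): choose the design natural frequency that maximizes the minimum power over the uncertain excitation-frequency range.

```python
def power(omega, omega_n, zeta=0.05):
    """Normalized resonant response of a lightly damped linear oscillator,
    used here as a generic stand-in for harvester output power."""
    r = omega / omega_n
    return r ** 2 / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

def robust_design(omega_range, designs, zeta=0.05):
    """Max-min design: pick the natural frequency whose worst-case
    (minimum) power over the uncertain excitation range is largest."""
    def worst(omega_n):
        return min(power(w, omega_n, zeta) for w in omega_range)
    best = max(designs, key=worst)
    return best, worst(best)
```

The max-min design can never do worse, in the worst case, than naively tuning the resonance to the nominal excitation frequency, which is the paper's central point.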
Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.
Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng
2013-01-01
Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache, but locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvements over static analysis and full cache locking.
Combining Instruction Prefetching with Partial Cache Locking to Improve WCET in Real-Time Systems
Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng
2013-01-01
Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache, but locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvements over static analysis and full cache locking. PMID:24386133
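A back-of-envelope model shows why locking the hottest blocks can tighten a WCET bound: locked blocks are guaranteed hits, while (pessimistically) every unlocked access is counted as a miss. Real WCET analysis models the replacement policy and can do better on unlocked blocks; this sketch, with assumed latencies, only illustrates the bound arithmetic.

```python
def wcet_bound(block_accesses, lock_capacity, hit=1, miss=10):
    """Pessimistic instruction-fetch WCET bound (in cycles): lock the
    hottest blocks so they always hit; assume every unlocked access
    misses. Latencies `hit` and `miss` are illustrative assumptions.
    block_accesses: dict block -> worst-case access count."""
    hottest = sorted(block_accesses, key=block_accesses.get, reverse=True)
    locked = set(hottest[:lock_capacity])
    return sum(n * (hit if b in locked else miss)
               for b, n in block_accesses.items())
```

Locking more of the hot set monotonically lowers this bound, while locking the whole cache (as the abstract notes) can hurt actual performance because cold blocks then evict nothing and prefetching has nowhere to place lines.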
SBLOCA outside containment at Browns Ferry Unit One: accident sequence analysis. [Small break
DOE Office of Scientific and Technical Information (OSTI.GOV)
Condon, W.A.; Harrington, R.M.; Greene, S.R.
1982-11-01
This study describes the predicted response of Unit 1 at the Browns Ferry Nuclear Plant to a postulated small-break loss-of-coolant accident outside of the primary containment. The break has been assumed to occur in the scram discharge volume piping immediately following a reactor scram that cannot be reset. The events before core uncovering are discussed for both the worst-case accident sequence without operator action and for the more likely sequences with operator action. Without operator action, the events after core uncovering would include core meltdown and subsequent containment failure, and this event sequence has been determined through use of the MARCH code. An estimate of the magnitude and timing of the concomitant release of the noble gas, cesium, and iodine-based fission products to the environment is provided in Volume 2 of this report.
Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling
NASA Astrophysics Data System (ADS)
Pour Aryan, N.; Heiß, L.; Schmitt-Landsiedel, D.; Georgakos, G.; Wirnshofer, M.
2012-09-01
In Adaptive Voltage Scaling (AVS) the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation for PVTA variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs, considerable power savings are achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs to implement in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated in a 16-bit multiplier test circuit, and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power saving potential of each Pre-Error detection approach.
Ollson, Christopher A; Whitfield Aslund, Melissa L; Knopper, Loren D; Dan, Tereza
2014-01-01
The regions of Durham and York in Ontario, Canada have partnered to construct an energy-from-waste (EFW) thermal treatment facility as part of a long term strategy for the management of their municipal solid waste. In this paper we present the results of a comprehensive ecological risk assessment (ERA) for this planned facility, based on baseline sampling and site specific modeling to predict facility-related emissions, which was subsequently accepted by regulatory authorities. Emissions were estimated for both the approved initial operating design capacity of the facility (140,000 tonnes per year) and the maximum design capacity (400,000 tonnes per year). In general, calculated ecological hazard quotients (EHQs) and screening ratios (SRs) for receptors did not exceed the benchmark value (1.0). The only exceedances noted were generally due to existing baseline media concentrations, which did not differ from those expected for similar unimpacted sites in Ontario. This suggests that these exceedances reflect conservative assumptions applied in the risk assessment rather than actual potential risk. However, under predicted upset conditions at 400,000 tonnes per year (i.e., facility start-up, shutdown, and loss of air pollution control), a potential unacceptable risk was estimated for freshwater receptors with respect to benzo(g,h,i)perylene (SR=1.1), which could not be attributed to baseline conditions. Although this slight exceedance reflects a conservative worst-case scenario (upset conditions coinciding with worst-case meteorological conditions), further investigation of potential ecological risk should be performed if this facility is expanded to the maximum operating capacity in the future. © 2013.
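The screening arithmetic behind the reported ratios is a simple quotient; this sketch restates it with illustrative numbers that are not taken from the Durham/York assessment:

```python
# Minimal screening-ratio / hazard-quotient check as used in ecological
# risk assessment: predicted exposure divided by a toxicity benchmark.
# Concentrations below are illustrative, not from the EFW facility study.

def screening_ratio(predicted_conc, benchmark_conc):
    return predicted_conc / benchmark_conc

# A ratio at or below 1.0 passes the screen; above 1.0 it is flagged for
# further investigation, as happened for the SR = 1.1 exceedance under
# predicted upset conditions in the abstract.
ok      = screening_ratio(0.8, 1.0)   # passes
flagged = screening_ratio(1.1, 1.0)   # flagged
```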
Pulmonary function tests correlated with thoracic volumes in adolescent idiopathic scoliosis.
Ledonio, Charles Gerald T; Rosenstein, Benjamin E; Johnston, Charles E; Regelmann, Warren E; Nuckley, David J; Polly, David W
2017-01-01
Scoliosis deformity has been linked with deleterious changes in the thoracic cavity that affect pulmonary function. The causal relationship between spinal deformity and pulmonary function has yet to be fully defined. It has been hypothesized that deformity correction improves pulmonary function by restoring both respiratory muscle efficiency and increasing the space available to the lungs. This research aims to correlate pulmonary function and thoracic volume before and after scoliosis correction. Retrospective correlational analysis between thoracic volume modeling from plain x-rays and pulmonary function tests was conducted. Adolescent idiopathic scoliosis patients enrolled in a multicenter database were sorted by pre-operative Total Lung Capacity (TLC) % predicted values from their Pulmonary Function Tests (PFT). Ten patients with the best and ten patients with the worst TLC values were included. Modeled thoracic volume and TLC values were compared before and 2 years after surgery. Scoliosis correction resulted in an increase in the thoracic volume for patients with the worst initial TLCs (11.7%) and those with the best initial TLCs (12.5%). The adolescents with the most severe pulmonary restriction prior to surgery strongly correlated with post-operative change in total lung capacity and thoracic volume (r2 = 0.839; p < 0.001). The mean increase in thoracic volume in this group was 373.1 cm3 (11.7%), which correlated with a 21.2% improvement in TLC. Scoliosis correction in adolescents was found to increase thoracic volume and is strongly correlated with improved TLC in cases with severe restrictive pulmonary function, but no correlation was found in cases with normal pulmonary function. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:175-182, 2017.
Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan
2016-06-02
Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study on children suspected of severe malaria in sub-Saharan Africa, the question arose of how much blood could be safely sampled; intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not to exceed 3.8 % of total blood volume (TBV). In a simulation exercise using data from children previously enrolled in a study of severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8 % safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data on age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe) (mL) was calculated by multiplying the body weight (kg) by the factor 80 (mL/kg). Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor was the ratio 'Hb of the child divided by the reference Hb'; both the lowest ('best case') and highest ('worst case') reference Hb values were used. Next, the exact volume that a 3.8 % proportion of this TBVf would represent was calculated, and this volume was compared to the blood volumes intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise showed that in 5.3 % (best case) and 11.4 % (worst case) of children the blood volume intended to be sampled would exceed the volume defined by the 3.8 % safety guideline. The highest proportions would be in the age groups 2-6 months (19.0 %; worst case scenario) and 6 months-2 years (15.7 %; worst case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst case scenario (p = 0.016). Blood sampling in children for research in P. falciparum endemic settings may easily violate the proposed safety guideline when applied to TBVf. Ethical committees and researchers should be wary of this and take appropriate precautions.
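The abstract's calculation chain (TBVe from weight, anaemia correction, 3.8 % limit) can be sketched directly. The formulas follow the abstract's description; the example child's weight and Hb values are hypothetical:

```python
# Sketch of the abstract's arithmetic: estimated total blood volume
# TBVe = 80 mL/kg x body weight, corrected for anaemia by the ratio of
# the child's Hb to a reference Hb, then capped at 3.8 % of the result.

MAX_FRACTION = 0.038  # safety guideline: sample at most 3.8 % of TBV

def max_safe_sample_ml(weight_kg, hb_child, hb_reference):
    tbv_e = 80.0 * weight_kg                    # estimated TBV, mL
    tbv_f = tbv_e * (hb_child / hb_reference)   # functional (anaemia-corrected) TBV
    return MAX_FRACTION * tbv_f

# Hypothetical severely anaemic child: 10 kg, Hb 6 g/dL vs reference 12 g/dL.
limit = max_safe_sample_ml(10, 6.0, 12.0)   # 0.038 * 800 * 0.5 = 15.2 mL
```

Note how the anaemia correction halves the allowable draw for this child, which is exactly why a positive malaria test raised the risk of violating the guideline in the simulation.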
Discussions On Worst-Case Test Condition For Single Event Burnout
NASA Astrophysics Data System (ADS)
Liu, Sandra; Zafrani, Max; Sherman, Phillip
2011-10-01
This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs based on analysis of quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB would be to use the ion with the highest mass, which results in the highest transient current due to charge deposition and displacement damage. The analyses also show that it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion; this has been verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.
NASA Astrophysics Data System (ADS)
Grimes, T. F.; Hagen, A. R.; Archambault, B. C.; Taleyarkhan, R. P.
2018-03-01
This paper describes the development of an SNM detection system for interrogating 1 m3 cargos via the combination of a D-D neutron interrogation source (with and without reflectors) and tensioned metastable fluid detectors (TMFDs). TMFDs have been previously shown (Taleyarkhan et al., 2008; Grimes et al., 2015; Grimes and Taleyarkhan, 2016; Archambault et al., 2017; Hagen et al., 2016) to be capable of using Threshold Energy Neutron Analysis (TENA) techniques to reject the ∼2.45 MeV D-D interrogating neutrons while still remaining sensitive to >2.45 MeV neutrons resulting from fission in the target (HEU) material. In order to enhance the performance, a paraffin reflector was included around the accelerator head. This reflector was used to direct neutrons into the package to increase the fission signal, to lower the energy of the interrogating neutrons to increase the fission cross-section with HEU, and also to direct interrogating neutrons away from the detectors in order to enhance the required discrimination between interrogating and fission neutrons. Experiments performed with a 239Pu-Be neutron source and MnO2 indicated that impressive performance gains could be made by placing a parabolic paraffin moderator between the interrogation source and an air-filled cargo container with HEU placed at the center. However, experiments with other cargo fillers (as specified in the well-known ANSI N42.41-2007 report), and with HEU placed in locations other than the center of the package, indicated that other reflector geometries might be superior due to over-"focusing" and the increased solid angle effects due to the accommodation of the moderator geometry. The best performance for the worst case of source location and box fill was obtained by placing the reflector only behind the D-D neutron source rather than in front of it.
Finally, it was shown that there could be significant gains in the ability to detect concealed SNM by operating the system in multiple geometric configurations. Worst case scenarios were created by filling the box with hydrogenous material and placing the HEU as far away as possible from the neutron source. The performance of the system in the worst-case scenarios were greatly improved by exchanging the location of the accelerator and the opposite TMFD panel half way through interrogation. Using this operation, scenarios with positions of the concealed SNM that were once the most challenging to successfully detect became readily detectable.
Great Plains Project: at worst a $1.7 billion squeeze
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maize, K.
1983-04-11
On January 29, 1982, seeking a loan guarantee for its coal-to-gas synfuels project, Great Plains Gasification Associates told the Department of Energy that they expected to reap $1.2 billion in net income to the partnership during the first 10 years of the venture. On March 31, 1983, Great Plains treasurer Rodney Boulanger had a different projection: a horrific loss of $773 million in the first decade. The Great Plains project, with construction 50% complete, is being built near Beulah, ND. The project has a design capacity of 137.5 million cubic feet a day of SNG. Great Plains' analysis assumes that the plant will operate at 70% of design capacity in 1985, 77% in 1986, 84% in 1987 and 91% thereafter. The company projects the total project cost at $2.1 billion, consisting of plant costs of $1.9 billion and coal mine costs of $156 million. In originally projecting a cumulative net income of better than $1 billion, the partners anticipated running losses in only three of the first 10 years, and cash distributions from the project of $893 million during the first decade. Under the new projections, even in the best case, the first four years would show losses and there would be no distribution to the partners. In the worst case, the project would run in the red every year for the first 10 years.
NASA Astrophysics Data System (ADS)
Cha, J.; Ryu, J.; Lee, M.; Song, C.; Cho, Y.; Schumacher, P.; Mah, M.; Kim, D.
Conjunction prediction is one of the critical operations in space situational awareness (SSA). For geospace objects, common algorithms for conjunction prediction are usually based on an all-pairwise check, a spatial hash, or a kd-tree. The computational load is usually reduced through filters; however, these leave a good chance of missing potential collisions between space objects. We present a novel algorithm which both guarantees no missed conjunctions and efficiently answers a variety of spatial queries, including pairwise conjunction prediction. The algorithm takes only O(k log N) time for N objects in the worst case to answer conjunction queries, where k is a constant proportional to the prediction time length. The proposed algorithm, named DVD-COOP (Dynamic Voronoi Diagram-based Conjunctive Orbital Object Predictor), is based on the dynamic Voronoi diagram of moving spherical balls in 3D space. The algorithm has a preprocessing phase consisting of two steps: the construction of an initial Voronoi diagram (taking O(N) time on average) and the construction of a priority queue for the events of topology changes in the Voronoi diagram (taking O(N log N) time in the worst case). The scalability of the proposed algorithm is also discussed. We hope that the proposed Voronoi approach will change the computational paradigm in spatial reasoning among space objects.
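For contrast with the Voronoi approach, the elementary building block of a pairwise conjunction check can be sketched. This is not the DVD-COOP algorithm; it is the standard closest-approach formula for two objects on straight-line trajectories, t* = -(Δr·Δv)/|Δv|², with illustrative state vectors:

```python
# Minimal pairwise conjunction screen (not DVD-COOP): time and distance of
# closest approach for two objects with constant velocities. Units are
# arbitrary but must be consistent (e.g. km and km/s).

def closest_approach(r1, v1, r2, v2):
    dr = [a - b for a, b in zip(r1, r2)]          # relative position
    dv = [a - b for a, b in zip(v1, v2)]          # relative velocity
    dv2 = sum(c * c for c in dv)
    t = 0.0 if dv2 == 0 else -sum(a * b for a, b in zip(dr, dv)) / dv2
    t = max(t, 0.0)                               # only look forward in time
    miss = sum((a + t * b) ** 2 for a, b in zip(dr, dv)) ** 0.5
    return t, miss

# Head-on pass along the x axis with a 1-unit lateral offset:
t, miss = closest_approach([0, 0, 0], [1, 0, 0], [2, 1, 0], [-1, 0, 0])
```

Running this check over all N(N-1)/2 pairs is exactly the O(N²) all-pairwise baseline the abstract's O(k log N) structure is designed to avoid.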
A lock-free priority queue design based on multi-dimensional linked lists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dechev, Damian; Zhang, Deli
2015-04-03
The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over state-of-the-art approaches under high concurrency.
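The key-to-coordinate mapping the abstract describes can be sketched as base conversion. This is an assumed reading, not the paper's implementation: a key from a universe of size N becomes D digits in base b = ⌈N^(1/D)⌉, so a search follows at most D·b links, which is O(log N) for a suitable D:

```python
# Sketch (assumed, not the paper's code) of MDList's injective key mapping:
# write a scalar key as D base-b digits and use them as coordinates.
# Keys sharing a coordinate prefix share ancestors in the structure, so
# lexicographic order on coordinates matches numeric order on keys.

import math

def key_to_coords(key, universe_size, dims):
    base = math.ceil(universe_size ** (1.0 / dims))
    coords = []
    for _ in range(dims):
        coords.append(key % base)
        key //= base
    return coords[::-1]  # most-significant digit first: prefix ordering

a = key_to_coords(5, universe_size=64, dims=3)   # base 4 digits of 5
b = key_to_coords(6, universe_size=64, dims=3)   # base 4 digits of 6
```

Because ordering is fixed by the digits themselves, insertion needs no rebalancing and no randomization, which is the property the abstract contrasts with skiplists.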
Busch, Martin H J; Vollmann, Wolfgang; Grönemeyer, Dietrich H W
2006-05-26
Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure, which may allow diagnostic information about the implant lumen to be obtained (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach 1/4 of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution for a hot spot in thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst case assumptions, additional, more realistic conditions are considered in a numerical simulation, which may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations.
The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into each wire part at the location of the defect. The energy is distributed from there by heat conduction. Additionally, the effects of blood perfusion and blood flow are accounted for in some simulations, because the simultaneous appearance of all worst case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. The analytical solution as worst case scenario, as well as the finite volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting under a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius and wire material, as well as the physiological parameters blood perfusion and blood flow inside larger vessels, reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. The worst case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, but not at, the worst case. For both cases a substantial volume can reach a critical temperature increase in a short time.
The analytical solution, as the absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q V(ind) < 2 cm3) are definitely safe. Stents for coronary vessels, or resonators used as tracking devices for interventional procedures, therefore carry no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q V(ind) > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for.
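The worst-case picture of a point-like hot spot in homogeneous tissue without perfusion can be sketched with the standard steady-state point-source conduction solution, ΔT(r) = P/(4πκr). This formula and the power/conductivity values are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of the worst-case analytical picture: steady-state
# temperature rise at distance r from a point heat source of power P in
# homogeneous tissue with no perfusion cooling, dT = P / (4*pi*k*r).
# Formula is the textbook point-source solution; numbers are assumed.

import math

K_TISSUE = 0.5  # W/(m*K), assumed thermal conductivity of soft tissue

def temperature_rise(power_w, radius_m, k=K_TISSUE):
    return power_w / (4 * math.pi * k * radius_m)

# A hypothetical 50 mW hot spot, evaluated 1 mm from the defect:
dT = temperature_rise(0.05, 1e-3)   # roughly 8 K in this toy setup
```

Even this crude estimate shows why a localized power loss of tens of milliwatts matters: a rise of several kelvin within a millimeter of the defect is in the range where cell damage begins, consistent with the abstract's conclusion.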
Moore, Spencer; Eng, Eugenia; Daniel, Mark
2003-12-01
In February 2000, Mozambique suffered its worst flooding in almost 50 years: 699 people died and hundreds of thousands were displaced. Over 49 countries and 30 international non-governmental organisations provided humanitarian assistance. Coordination of disaster assistance is critical for effective humanitarian aid operations, but limited attention has been directed toward evaluating the system-wide structure of inter-organisational coordination during humanitarian operations. Network analysis methods were used to examine the structure of inter-organisational relations among 65 non-governmental organisations (NGOs) involved in the flood operations in Mozambique. Centrality scores were used to estimate NGO-specific potential for aid coordination and tested against NGO beneficiary numbers. The average number of relief- and recovery-period beneficiaries was significantly greater for NGOs with high relative to low centrality scores (p < 0.05). This report addresses the significance of these findings in the context of the Mozambican 2000 floods and the type of data required to evaluate system-wide coordination.
PO*WW*ER mobile treatment unit process hazards analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, R.B.
1996-06-01
The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous constituents into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.
Jow, Uei-Ming; Ghovanloo, Maysam
2012-12-21
We present a design methodology for an overlapping hexagonal planar spiral coil (hex-PSC) array, optimized for the creation of a homogeneous magnetic field for wireless power transmission to randomly moving objects. The modular hex-PSC array has been implemented in the form of three parallel conductive layers, for which an iterative optimization procedure defines the PSC geometries. Since the overlapping hex-PSCs in different layers have different characteristics, the worst-case coil-coupling condition should be designed to provide the maximum power transfer efficiency (PTE) in order to minimize spatial fluctuations in received power. In the worst case, the transmitter (Tx) hex-PSC is overlapped by six PSCs and surrounded by six other adjacent PSCs. Using a receiver (Rx) coil 20 mm in radius, at a coupling distance of 78 mm and a maximum lateral misalignment of 49.1 mm (1/√3 of the PSC radius), we can receive power at a PTE of 19.6% from the worst-case PSC. Furthermore, we have studied the effects of Rx coil tilting and concluded that the PTE degrades significantly when θ > 60°. Solutions are: 1) activating two adjacent overlapping hex-PSCs simultaneously with out-of-phase excitations to create horizontal magnetic flux, and 2) including a small energy storage element in the Rx module to maintain power in the worst-case scenarios. In order to verify the proposed design methodology, we have developed the EnerCage system, which aims to power up biological instruments attached to or implanted in freely behaving small animal subjects' bodies during long-term electrophysiology experiments within large experimental arenas.
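The stated worst-case misalignment follows from hexagonal tiling geometry: the point farthest from the nearest coil center in a hex grid lies at radius/√3 from it. This sketch restates that relation with the abstract's 49.1 mm figure; the inferred Tx radius of about 85 mm is a derived value, not stated in the abstract:

```python
# Geometric relation from the abstract: worst-case lateral misalignment in
# a hexagonal PSC array is (coil radius) / sqrt(3). The Tx radius below is
# inferred from the quoted 49.1 mm, not stated directly in the source.

import math

def worst_case_misalignment(psc_radius_mm):
    return psc_radius_mm / math.sqrt(3)

r_tx = 49.1 * math.sqrt(3)   # implied Tx hex-PSC radius, ~85 mm
assert abs(worst_case_misalignment(r_tx) - 49.1) < 1e-9
```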
Failed State 2030: Nigeria - A Case Study
2011-02-01
disastrous ecological conditions in its Niger Delta region, and is fighting one of the modern world's worst legacies of political and economic corruption. A nation with more than 350 ethnic groups, 250 languages, and three distinct religious...happening in the world. The discussion herein is a mix of cultural sociology, political science, economics, military science (sometimes called
The 25 kW resonant dc/dc power converter
NASA Technical Reports Server (NTRS)
Robson, R. R.
1983-01-01
The feasibility of processing 25 kW of power with a single, transistorized, series resonant converter stage was demonstrated by the successful design, development, fabrication, and testing of such a device, which employs four Westinghouse D7ST transistors in a full-bridge configuration and operates from a 250-to-350 Vdc input bus. The unit has an overall worst-case efficiency of 93.5% at its full rated output of 1000 V and 25 A dc. A solid-state dc input circuit breaker and output transient-current limiters are integrated into the design. Full circuit details of the converter are presented along with the test data.
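The rated figures fix the power budget directly; a quick check of the arithmetic, using only the numbers stated in the abstract:

```python
# Power budget implied by the abstract's ratings: 1000 V x 25 A output at
# a worst-case efficiency of 93.5%.

p_out = 1000 * 25        # W, rated dc output (25 kW)
eff = 0.935              # worst-case overall efficiency
p_in = p_out / eff       # power drawn from the 250-350 Vdc bus, ~26.7 kW
p_loss = p_in - p_out    # dissipated in the converter, ~1.74 kW
```

So the four-transistor bridge and resonant tank must shed roughly 1.7 kW of heat at full load in the worst case, which frames the thermal design problem.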
NASA Technical Reports Server (NTRS)
Koontz, Steven L.; Boeder, Paul A.; Pankop, Courtney; Reddell, Brandon
2005-01-01
The role of structural shielding mass in the design, verification, and in-flight performance of the International Space Station (ISS), in both the natural and induced orbital ionizing radiation (IR) environments, is reported. Detailed consideration of the effects of both the natural and induced ionizing radiation environments during ISS design, development, and flight operations has produced a safe, efficient manned space platform that is largely immune to deleterious effects of the LEO ionizing radiation environment. The assumption of a small shielding mass for purposes of design and verification has been shown to be a valid worst-case approximation approach to design for reliability, though the predicted dependences of single event effects (SEE) on latitude, longitude, SEP events, and spacecraft structural shielding mass are not observed. The Figure of Merit (FOM) method overpredicts the rate for median shielding masses of about 10 g/cm^2 by only a factor of 3, while the Scott Effective Flux Approach (SEFA) method overestimates by about one order of magnitude, as expected. The Integral Rectangular Parallelepiped (IRPP), SEFA, and FOM methods for estimating on-orbit single event upset (SEU) rates all utilize some version of the CREME-96 treatment of energetic particle interaction with structural shielding, which has been shown to underestimate the production of secondary particles in heavily shielded manned spacecraft. The need for more work directed to development of a practical understanding of secondary particle production in massive structural shielding for SEE design and verification is indicated.
In contrast, total dose estimates using CAD-based shielding mass distribution functions and the Shieldose code provided a reasonably accurate estimate of accumulated dose in grays internal to the ISS pressurized elements, albeit as a result of using worst-on-worst-case assumptions (500 km altitude x 2) that compensate for ignoring both GCR and secondary particle production in massive structural shielding.
38th Annual Maintenance & Operations Cost Study for Schools
ERIC Educational Resources Information Center
Agron, Joe
2009-01-01
Despite the worst economic environment in generations, spending by K-12 institutions on maintenance and operations (M&O) held its own--defying historical trends that have shown M&O spending among the most affected in times of budget tightening. This article presents data from the 38th annual Maintenance & Operations Cost Study for…
2012-04-30
DoD SERC, Aeronautics & Astronautics, 5/16/2012, NPS 9th Annual Acquisition Research Symposium. [Figure residue: plots of probability to complete a mission versus time (mins) for architecture 1 and architecture 2, and versus % of system failures for the worst case in arch1 and the worst case in arch2.]
Fine-Scale Structure Design for 3D Printing
NASA Astrophysics Data System (ADS)
Panetta, Francis Julian
Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. 
The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.
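The fourth chapter's conversion of pressure optimization into a linear program can be illustrated on a toy model. This is a hypothetical sketch, not the thesis's formulation: stress at one monitored point is taken as linear in the surface pressures, and with only box and total-force constraints the resulting LP admits a greedy closed-form solution:

```python
# Toy sketch of worst-case load analysis as a linear program (all numbers
# hypothetical). Stress at a monitored point is modeled as linear in the
# surface pressures p_i:  s(p) = sum_i a_i * p_i.  With box constraints
# 0 <= p_i <= p_max and a total-force budget sum_i p_i <= F, the LP
# "maximize s(p)" is solved greedily: pour the budget onto the largest a_i.
def worst_case_pressures(a, p_max, budget):
    p = [0.0] * len(a)
    # visit pressure sites in order of decreasing stress influence
    for i in sorted(range(len(a)), key=lambda i: -a[i]):
        if a[i] <= 0 or budget <= 0:
            break
        p[i] = min(p_max, budget)
        budget -= p[i]
    return p

a = [0.3, 1.2, 0.7, -0.4]          # hypothetical stress-influence coefficients
p = worst_case_pressures(a, p_max=1.0, budget=1.5)
stress = sum(ai * pi for ai, pi in zip(a, p))
print(p, stress)
```

The full problem in the thesis couples many stress points and surface regions, so a general LP solver is needed there; the greedy form only covers this single-objective special case.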
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahi-Anwar, M; Young, S; Lo, P
Purpose: A method to discriminate different types of renal cell carcinoma (RCC) was developed using attenuation values observed in multiphasic contrast-enhanced CT. This work evaluates the sensitivity of this RCC discrimination task at different CT radiation dose levels. Methods: We selected 5 cases of kidney lesion patients who had undergone four-phase CT scans covering the abdomen to the iliac crest. Through an IRB-approved study, the scans were conducted on 64-slice CT scanners (Definition AS/Definition Flash, Siemens Healthcare) using automatic tube-current modulation (TCM). The protocol included an initial baseline unenhanced scan, followed by three post-contrast injection phases. CTDIvol (32 cm phantom) measured between 9 and 35 mGy for any given phase. As a preliminary study, we limited the scope to the cortico-medullary phase, shown previously to be the most discriminative phase. A previously validated method was used to simulate a reduced-dose acquisition by adding noise to raw CT sinogram data, emulating corresponding images at simulated doses of 50%, 25%, and 10%. To discriminate the lesion subtype, ROIs were placed in the most enhancing region of the lesion. The mean HU value of an ROI was extracted and used to assign the worst-case RCC subtype, ranked in the order of clear cell, papillary, chromophobe, and the benign oncocytoma. Results: Two patients exhibited a change of worst-case RCC subtype between original and simulated scans, at 25% and 10% doses. In one case, the worst-case RCC subtype changed from oncocytoma to chromophobe at 10% and 25% doses, while the other case changed from oncocytoma to clear cell at 10% dose. Conclusion: Based on preliminary results from an initial cohort of 5 patients, worst-case RCC subtypes remained constant at all simulated dose levels except in 2 patients. Further study conducted on more patients will be needed to confirm our findings.
Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant Support from: U01 CA181156.
Complications and results of subdural grid electrode implantation in epilepsy surgery.
Lee, W S; Lee, J K; Lee, S A; Kang, J K; Ko, T S
2000-11-01
We assessed the risk of delayed subdural hematoma and other complications associated with subdural grid implantation. Forty-nine patients underwent subdural grid implantation with/without subdural strips or depth electrodes from January 1994 to August 1998. To identify the risk associated with subdural grid implantation, a retrospective review of all patients' medical records and radiological studies was performed. The major complications of 50 subdural grid electrode implantations were as follows: four cases (7.8%) of delayed subdural hematoma at the site of the subdural grid, requiring emergency operation; two cases (3.9%) of infection; one case (2.0%) of epidural hematoma; and one case (2.0%) of brain swelling. After subdural hematoma removal, the electrodes were left in place. CCTV monitoring and cortical stimulation studies were continued thereafter. No delayed subdural hematoma has occurred since routine placement of subdural drains was begun. In our experience the worst complication of subdural grid implantation has been delayed subdural hematoma. Placement of subdural drains and close observation may be helpful to prevent this serious complication.
Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino
2017-03-01
Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance, in terms of plan quality and robustness, of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull base and head and neck cancer patients.
Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases, in which most uncertainty scenarios struggled to meet tight dose limits. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull base cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Yang, Tsong-Shing; Chi, Ching-Chi; Wang, Shu-Hui; Lin, Jing-Chi; Lin, Ko-Ming
2016-10-01
Biologic therapies are more effective but more costly than conventional therapies in treating psoriatic arthritis. We aimed to evaluate the cost-efficacy of etanercept, adalimumab and golimumab therapies in treating active psoriatic arthritis in a Taiwanese setting. We conducted a meta-analysis of randomized placebo-controlled trials to calculate the incremental efficacy of etanercept, adalimumab and golimumab, respectively, in achieving Psoriatic Arthritis Response Criteria (PsARC) and a 20% improvement in the American College of Rheumatology score (ACR20). The base, best, and worst case incremental cost-effectiveness ratios (ICERs) for one subject to achieve PsARC and ACR20 were calculated. The annual ICERs per PsARC responder were US$27 047 (best scenario US$16 619; worst scenario US$31 350), US$39 339 (best scenario US$31 846; worst scenario US$53 501) and US$27 085 (best scenario US$22 716; worst scenario US$33 534) for etanercept, adalimumab and golimumab, respectively. The annual ICERs per ACR20 responder were US$27 588 (best scenario US$20 900; worst scenario US$41 800), US$39 339 (best scenario US$25 236; worst scenario US$83 595) and US$33 534 (best scenario US$27 616; worst scenario US$44 013) for etanercept, adalimumab and golimumab, respectively. In a Taiwanese setting, etanercept had the lowest annual costs per PsARC and ACR20 responder, while adalimumab had the highest annual costs per PsARC and ACR20 responder. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
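The ICER arithmetic underlying these figures is simple: annual drug cost divided by the incremental probability of response versus placebo. The cost and response-rate inputs below are hypothetical, since the abstract reports only the resulting ICERs:

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) arithmetic:
# annual cost divided by the incremental response probability vs placebo.
# All input numbers here are hypothetical for illustration.
def annual_icer(annual_cost_usd, response_rate_drug, response_rate_placebo):
    incremental_response = response_rate_drug - response_rate_placebo
    return annual_cost_usd / incremental_response

# e.g. a biologic costing US$15,000/year with a 70% vs 15% PsARC response:
icer = annual_icer(15_000, 0.70, 0.15)
print(f"ICER per PsARC responder: US${icer:,.0f}")
```

The "best" and "worst" scenarios in the abstract correspond to re-running this ratio with the confidence-interval bounds of the incremental response rate.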
Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.
Ferrie, Christopher; Blume-Kohout, Robin
2016-03-04
A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/sqrt[N])-in contrast to that of classical probability estimation, which is O(1/N)-where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
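The practical cost of the O(1/sqrt[N]) scaling can be made concrete with a back-of-envelope calculation (constants suppressed, so these are illustrative magnitudes, not the paper's bounds):

```python
# Illustration of the scaling gap: minimax relative-entropy risk of
# nonadaptive quantum tomography ~ N**(-1/2), versus ~ N**(-1) for
# classical probability estimation (constants suppressed throughout).
def copies_needed(target_risk, exponent):
    # risk ~ N**(-exponent)  =>  N ~ target_risk**(-1/exponent)
    return target_risk ** (-1.0 / exponent)

eps = 1e-4                                   # desired risk level
n_classical = copies_needed(eps, 1.0)        # O(1/N) scaling
n_quantum = copies_needed(eps, 0.5)          # O(1/sqrt(N)) scaling
print(f"classical: N ~ {n_classical:.0e}, quantum: N ~ {n_quantum:.0e}")
```

Under these illustrative numbers, reaching the same risk level takes on the order of 10^4 samples classically but 10^8 copies for nonadaptive tomography, which is the deficiency the paper traces to sampling mismatch.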
Bartnicka, Joanna; Zietkiewicz, Agnieszka A; Kowalski, Grzegorz J
2016-08-01
A comparison of 1-port, 2-port, 3-port, and 4-port laparoscopic cholecystectomy techniques from the point of view of workflow criteria was made to both identify specific workflow components that can cause surgical disturbances and indicate good and bad practices. As a case study, laparoscopic cholecystectomies, including manual tasks and interactions within teamwork members, were video-recorded and analyzed on the basis of specially encoded workflow information. The parameters for comparison were defined as follows: surgery time, tool and hand activeness, operator's passive work, collisions, and operator interventions. It was found that 1-port cholecystectomy is the worst technique because of nonergonomic body position, technical complexity, organizational anomalies, and operational dynamism. The differences between laparoscopic techniques are closely linked to the costs of the medical procedures. Hence, knowledge about the surgical workflow can be used for both planning surgical procedures and balancing the expenses associated with surgery.
NASA Technical Reports Server (NTRS)
Hawkins, J. E.
1980-01-01
A 0.15 scale model of a proposed conformal variable-ramp inlet for the Multirole Fighter was tested from Mach 0.8 to 2.2 at a wide range of angles of attack and sideslip. Inlet ramp angle was varied to optimize ramp angle as a function of engine airflow, Mach number, angle of attack, and angle of sideslip. Several inlet configuration options were investigated to study their effects on inlet operation and to establish the final flight configuration. These variations were cowl sidewall cutback, cowl lip bluntness, boundary layer bleed, and first-ramp leading edge shape. Diagnostic and engine face instrumentation were used to evaluate inlet operation at various inlet stations and at the inlet/engine interface. Pressure recovery and stability of the inlet were satisfactory for the proposed application. On the basis of an engine stability audit of the worst-case instantaneous distortion patterns, no inlet/engine compatibility problems are expected for normal operations.
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the little effect that the selection of a particular metaheuristic and the variations in their operational parameters have on this optimization problem.
Solid State Clipper Diodes for High Power Modulators.
1978-11-01
[Garbled scan; recoverable fragments:] ...modeled at low powers and later confirmed in actual pulser operation. ... In our design the worst-case diode leakage was 15 milliamperes (mA) at 1 kV ... the diode junction capacitance and stray capacitance affect the voltage division whenever the...
Designing a 25-kilowatt high frequency series resonant
NASA Technical Reports Server (NTRS)
Robson, R. R.
1984-01-01
The feasibility of processing 25 kW of power with a single, transistorized, 20 kHz, series resonant converter stage has been demonstrated by the successful design, development, fabrication, and testing of such a device. It employs four Westinghouse D7ST transistors in a full-bridge configuration and operates from a 250-to-350-Vdc input bus. The unit has an overall worst-case efficiency of 93.5% at its full rated output of 1000 V and 25 A dc. A solid-state dc input circuit breaker and output-transient-current limiters are integrated into the design. Circuit details of the converter are presented along with test data.
Boehmler, Erick M.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 3.8 to 6.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 6.7 ft. The worst-case abutment scour also occurred at the 500-year discharge. Pier scour ranged from 9.1 to 10.2 ft. The worst-case pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.; Hammond, Robert E.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour at the left abutment ranged from 3.1 to 10.3 ft., with the worst case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 6.4 to 10.4 ft., with the worst case occurring at the 100-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Medalie, Laura
1997-01-01
Contraction scour for the modelled flows ranged from 1.0 to 2.7 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour ranged from 8.4 to 17.6 ft. The worst-case abutment scour for the right abutment occurred at the incipient-overtopping discharge. For the left abutment, the worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, R.L.; Medalie, Laura
1998-01-01
Contraction scour for all modelled flows ranged from 0.0 to 2.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 6.7 to 8.7 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. Right abutment scour ranged from 7.8 to 9.5 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and Davis, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Isolator fragmentation and explosive initiation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Rae, Philip John; Foley, Timothy J.
2016-09-19
Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.
Isolator fragmentation and explosive initiation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Rae, Philip John; Foley, Timothy J.
2015-09-30
Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.
Quadriceps tendon rupture - treatment results.
Popov, Iva; Ristić, Vladimir; Maljanović, Mirsad; Milankov, Vukadin
2013-01-01
Quadriceps tendon rupture is a rare but rather serious injury. If this injury is not promptly recognized and operated on early, it may lead to disability. This research was aimed at pointing out the results and complications of surgical treatment of quadriceps tendon rupture. This retrospective multicentric study was conducted in a group of 29 patients (mostly elderly men). The Lysholm knee scoring scale was used to evaluate the surgical results. The post-operative results were compared in relation to the type of tendon rupture reconstruction (acute or chronic), various surgical techniques, type of injury (unilateral or bilateral), as well as the presence or absence of comorbid risk factors in the patients. The average Lysholm score was 87.6. Excellent and satisfactory Lysholm score results dominated in our sample of patients. Better post-operative results were recorded in the group of patients without risk factors, in case of a bilateral injury, and in case of an acute injury. The best result was obtained after performing the reconstruction using anchors, and the worst result came after using the Codivilla technique. Early diagnosis and surgical treatment are an absolute imperative in the management of this injury. We have not proven that a certain surgical technique has an advantage over the others. Comorbid risk factors are related to a lower Lysholm score. Despite a few cases of complications, we can conclude that surgical treatment yields satisfactory results.
Steady State Thermal Analyses of SCEPTOR X-57 Wingtip Propulsion
NASA Technical Reports Server (NTRS)
Schnulo, Sydney L.; Chin, Jeffrey C.; Smith, Andrew D.; Dubois, Arthur
2017-01-01
Electric aircraft concepts enable advanced propulsion-airframe integration approaches that promise increased efficiency as well as reduced emissions and noise. NASA's fully electric Maxwell X-57, developed under the SCEPTOR program, features distributed propulsion across a high-aspect-ratio wing. There are 14 propulsors in all: 12 high-lift motors that are active only during takeoff and climb, and 2 larger motors positioned on the wingtips that operate over the entire mission. The power electronics involved in the wingtip propulsion are temperature sensitive and therefore require thermal management. This work focuses on the high- and low-fidelity heat transfer analysis methods performed to ensure that the wingtip motor inverters do not reach their temperature limits. It also explores different geometry configurations involved in the X-57 development and any thermal concerns. All analyses presented are performed at steady state under stressful operating conditions, therefore predicting temperatures that are considered a conservative, worst-case scenario.
A VLSI implementation of DCT using pass transistor technology
NASA Technical Reports Server (NTRS)
Kamath, S.; Lynn, Douglas; Whitaker, Sterling
1992-01-01
A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in a real-time fashion operating at 34 MHz (worst case) is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and resizing.
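The modified (radix-4) Booth recoding the MAC cells rely on can be sketched behaviorally. This is an illustration of the algorithm only, not the chip's implementation: the multiplier is recoded into digits in {-2, -1, 0, 1, 2}, so an n-bit multiply needs only about n/2 partial products, each a shift or negation of the multiplicand:

```python
# Behavioral sketch of radix-4 (modified) Booth recoding. Digit i is
# formed from bit triple (y_{2i+1}, y_{2i}, y_{2i-1}) as
#   d_i = y_{2i-1} + y_{2i} - 2*y_{2i+1},  with y_{-1} = 0,
# and the multiplier satisfies y == sum(d_i * 4**i).
def booth_recode(y, n_bits):
    digits = []
    y_prev = 0                       # the implicit y_{-1} = 0 bit
    for i in range((n_bits + 1) // 2 + 1):
        b0 = (y >> (2 * i)) & 1      # bit y_{2i}
        b1 = (y >> (2 * i + 1)) & 1  # bit y_{2i+1}
        digits.append(y_prev + b0 - 2 * b1)
        y_prev = b1
    return digits

def booth_multiply(x, y, n_bits=16):
    # each partial product is 0, ±x, or ±2x, shifted left by 2i bits
    return sum(d * x * 4 ** i for i, d in enumerate(booth_recode(y, n_bits)))

print(booth_multiply(1234, 5678))    # equals 1234 * 5678
```

In hardware, each digit selects a hardwired shift/negate of the multiplicand, which is what keeps the MAC cell layouts regular.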
Near-Earth Object Astrometric Interferometry
NASA Technical Reports Server (NTRS)
Werner, Martin R.
2005-01-01
Using astrometric interferometry on near-Earth objects (NEOs) poses many interesting and difficult challenges. Poor reflectance properties and potentially no significant active emissions lead to NEOs having intrinsically low visual magnitudes. Using worst case estimates for signal reflection properties leads to NEOs having visual magnitudes of 27 and higher. Today the most sensitive interferometers in operation have limiting magnitudes of 20 or less. The main reason for this limit is due to the atmosphere, where turbulence affects the light coming from the target, limiting the sensitivity of the interferometer. In this analysis, the interferometer designs assume no atmosphere, meaning they would be placed at a location somewhere in space. Interferometer configurations and operational uncertainties are looked at in order to parameterize the requirements necessary to achieve measurements of low visual magnitude NEOs. This analysis provides a preliminary estimate of what will be required in order to take high resolution measurements of these objects using interferometry techniques.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) time for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
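The closest-probability-distribution step can be sketched as a truncate-and-redistribute pass over the eigenvalues: zero out the most-negative entries and spread the accumulated deficit over the rest. The pure-Python sketch below is illustrative, not the authors' code; it sorts first (O(d log d)), whereas the paper's linear-time claim refers to the pass itself once values are ordered.

```python
def project_to_simplex(values):
    """Nearest probability distribution (in 2-norm) to real numbers
    summing to 1: walk from the smallest value up, zeroing negatives
    and carrying their mass, then shift the survivors uniformly."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    lam = [values[i] for i in order]        # ascending copy
    acc = 0.0                               # accumulated negative mass
    for i in range(len(lam)):
        remaining = len(lam) - i
        if lam[i] + acc / remaining < 0.0:
            acc += lam[i]                   # zero it out, carry the deficit
            lam[i] = 0.0
        else:
            shift = acc / remaining         # spread deficit over the rest
            for j in range(i, len(lam)):
                lam[j] += shift
            break
    out = [0.0] * len(values)
    for pos, i in enumerate(order):         # undo the sort
        out[i] = lam[pos]
    return out
```

Applied to the eigenvalues of the candidate matrix μ, this yields the spectrum of the nearest physical state ρ.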
Sorting on STAR. [CDC computer algorithm timing comparison
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of O(N(log N)^2) as compared with O(N log N) for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter the results predicted by worst-case asymptotic complexity analysis.
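Batcher's network sorts via fixed, data-independent compare-exchange stages, which is exactly what maps well onto vector hardware like STAR despite the O(N(log N)^2) operation count. A recursive bitonic-sort sketch (a standard textbook formulation, unrelated to the original STAR code; input length must be a power of two):

```python
def bitonic_sort(xs, up=True):
    """Batcher's bitonic sort. Sort one half ascending and the other
    descending to form a bitonic sequence, then merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    first = bitonic_sort(xs[:mid], True)      # ascending half
    second = bitonic_sort(xs[mid:], False)    # descending half
    return _bitonic_merge(first + second, up)

def _bitonic_merge(xs, up):
    """Merge a bitonic sequence with a fixed compare-exchange pattern;
    every iteration of the loop is independent, hence vectorizable."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    for i in range(mid):
        if (xs[i] > xs[i + mid]) == up:
            xs[i], xs[i + mid] = xs[i + mid], xs[i]
    return _bitonic_merge(xs[:mid], up) + _bitonic_merge(xs[mid:], up)
```

Quicksort's comparisons, by contrast, depend on the data, so a serial machine runs it faster while a vector machine can amortize Batcher's extra work across wide compare-exchanges.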
Stabilisation and humanitarian access in a collapsed state: the Somali case.
Menkhaus, Ken
2010-10-01
Somalia today is the site of three major threats: the world's worst humanitarian crisis; the longest-running instance of complete state collapse; and a robust jihadist movement with links to Al-Qa'ida. External state-building, counter-terrorism and humanitarian policies responding to these threats have worked at cross-purposes. State-building efforts that insist humanitarian relief be channelled through the nascent state in order to build its legitimacy and capacity undermine humanitarian neutrality when the state is a party to a civil war. Counter-terrorism policies that seek to ensure that no aid benefits terrorist groups have the net effect of criminalising relief operations in countries where poor security precludes effective accountability. This paper argues that tensions between stabilisation and humanitarian goals in contemporary Somalia reflect a long history of politicisation of humanitarian operations in the country. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.
Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders
2010-08-15
This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health, which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which challenges risk assessors to give the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst-case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst-case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis.
Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk components in cumulative risk management. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance, while the second part performs a quantum search in that restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(sqrt(2^(n-M'))) and O(2^(n-M')), respectively, where n is the number of variables and M' the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve.
The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problem. The general version of the algorithm is presented and analyzed.
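For concreteness, Exact Satisfiability requires that exactly one literal in each clause be true (unlike ordinary SAT's at-least-one). A brute-force O(2^n) counter pins down the problem the O(2^(n-M')) quantum counting bound refers to; the signed-integer clause encoding is an assumption for illustration:

```python
from itertools import product

def exact_sat_count(n, clauses):
    """Count assignments where EXACTLY one literal per clause is true.
    Clauses are lists of signed 1-based variable indices; a negative
    index means the negated variable. Brute force over all 2^n inputs."""
    count = 0
    for bits in product((False, True), repeat=n):
        if all(sum(bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1]
                   for l in clause) == 1          # exactly one true literal
               for clause in clauses):
            count += 1
    return count
```

Each linearly independent clause cuts the effective search space, which is how the quantum algorithm's exponent drops from n to n - M'.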
Managing risk in a challenging financial environment.
Kaufman, Kenneth
2008-08-01
Five strategies can help hospital financial leaders balance their organizations' financial and risk positions: understand the hospital's financial condition; determine the desired level of risk; consider total risk; use a portfolio approach; and explore best-case/worst-case scenarios to measure risk.
A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack
NASA Technical Reports Server (NTRS)
London, K. W.
2001-01-01
A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing more specialized representations. This model is used to predict the nutation growth time constant for the Triana spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst-case sloshing with a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing with a three-degree-of-freedom pendulum model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Schild, S; Bues, M
Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effect of respiratory-motion-induced changes in patient anatomy on dose distributions was investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT.
Although robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.
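In worst-case robust optimization, the "worst case" is commonly assembled per voxel across the uncertainty scenarios: the lowest scenario dose in target voxels and the highest in normal tissue, with the objective evaluated on that composite distribution. A minimal sketch under those common assumptions (the data layout is hypothetical, not this study's code):

```python
def worst_case_dose(scenario_doses, is_target):
    """Per-voxel composite worst case across uncertainty scenarios
    (set-up shifts, range over/undershoot): take the LOWEST dose for
    target voxels and the HIGHEST for normal-tissue voxels."""
    n_vox = len(scenario_doses[0])
    return [min(s[v] for s in scenario_doses) if is_target[v]
            else max(s[v] for s in scenario_doses)
            for v in range(n_vox)]

# Two scenarios, three voxels: the first two are target, the last an OAR.
doses = [[60.0, 58.0, 20.0],
         [59.0, 61.0, 25.0]]
wc = worst_case_dose(doses, [True, True, False])
```

Optimizing beamlet weights against this composite penalizes the plan wherever any scenario underdoses the target or overdoses normal tissue, which is why the resulting plans degrade gracefully under perturbations.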
Adverse weather impact on aviation safety, investigation and oversight
NASA Technical Reports Server (NTRS)
Smith, M. J.
1985-01-01
A brief review of the weather factors that affect aviation safety with respect to U.S. Coast Guard operations is presented. Precise meteorological information is an absolute necessity for the Coast Guard, which must conduct life-saving and rescue operations under the worst of weather conditions. Many times the weather conditions in which they operate are the cause of, or a contributing factor to, the predicament from which they must execute a rescue operation.
Estimated cost of universal public coverage of prescription drugs in Canada
Morgan, Steven G.; Law, Michael; Daw, Jamie R.; Abraham, Liza; Martin, Danielle
2015-01-01
Background: With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. Methods: We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Results: Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. Interpretation: The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. PMID:25780047
Estimated cost of universal public coverage of prescription drugs in Canada.
Morgan, Steven G; Law, Michael; Daw, Jamie R; Abraham, Liza; Martin, Danielle
2015-04-21
With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. © 2015 Canadian Medical Association or its licensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, Henry
This research was mostly concerned with asymmetric vertical displacement event (AVDE) disruptions, which are the worst case scenario for producing a large asymmetric wall force. This is potentially a serious problem in ITER.
Farrar, John T; Pritchett, Yili L; Robinson, Michael; Prakash, Apurva; Chappell, Amy
2010-02-01
Data on 1,700 patients pooled from 5 randomized, placebo-controlled duloxetine studies (3 in diabetic peripheral neuropathic pain and 2 in fibromyalgia) were analyzed to determine clinically important differences (CIDs) in the 0 to 10 Numeric Rating Scale-Pain Intensity (NRS-PI) for patient-reported "worst" and "least" pain intensity while validating the previously published level for "average" pain. The correspondence between the baseline-to-endpoint raw and percentage change in the NRS-PI for the worst, least, and average pain was compared to patients' perceived improvements at endpoint as measured by the 7-point Patient Global Impression of Improvement (PGI-I) scales. Stratification by baseline pain separated the raw but not the percent change scores. The PGI-I category of "much better" or above was our a priori definition of a CID. Cutoff points for the NRS-PI change scores were determined using a receiver operator curve analysis. A consistent relationship between the worst and average NRS-PI percent change and the PGI-I was demonstrated regardless of the study, pain type, age, sex, or treatment group, with a reduction of approximately 34%. The least pain item CID was slightly higher at 41%. Raw change CID cutoff points were approximately -2, -2.5, and -3 for least, average, and worst pain, respectively. We determined an anchor-based value for the change in the worst, least, and average pain intensity items of the Brief Pain Inventory that best represents a clinically important difference. Our findings support a standard definition of a clinically important difference in clinical trials of chronic-pain therapies. Copyright 2010 American Pain Society. Published by Elsevier Inc. All rights reserved.
Multiple usage of the CD PLUS/UNIX system: performance in practice.
Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A
1995-01-01
In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable. PMID:8547902
Multiple usage of the CD PLUS/UNIX system: performance in practice.
Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A
1995-10-01
In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable.
Carter, D A; Hirst, I L
2000-01-07
This paper considers the application of one of the weighted risk indicators used by the Major Hazards Assessment Unit (MHAU) of the Health and Safety Executive (HSE) in formulating advice to local planning authorities on the siting of new major accident hazard installations. In such cases the primary consideration is to ensure that the proposed installation would not be incompatible with existing developments in the vicinity, as identified by the categorisation of the existing developments and the estimation of individual risk values at those developments. In addition a simple methodology, described here, based on MHAU's "Risk Integral" and a single "worst case" event analysis, is used to enable the societal risk aspects of the hazardous installation to be considered at an early stage of the proposal, and to determine the degree of analysis that will be necessary to enable HSE to give appropriate advice.
Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J
2013-04-16
In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and in the exposure potential of bus routes can be exploited through optimization (e.g., of how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from worst- to best-case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
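The factor-of-2 spread between worst- and best-case assignment can be reproduced with a toy model: per-assignment impact = vehicle emission factor × route exposure potential, then take the best and worst over all bus-to-route permutations. All numbers below are hypothetical illustrations, not the study's data:

```python
from itertools import permutations

# Hypothetical per-km emission factors for three buses (old diesel,
# newer diesel, hybrid) and exposure potentials for three routes
# (dense urban, suburban, highway).
emissions = [1.0, 0.4, 0.2]
exposure = [3.0, 1.0, 0.5]

def total_impact(assignment):
    """Summed health-impact proxy for one bus-to-route assignment."""
    return sum(e * exposure[r] for e, r in zip(emissions, assignment))

impacts = [total_impact(p) for p in permutations(range(3))]
best, worst = min(impacts), max(impacts)
```

By the rearrangement inequality, the best assignment pairs the dirtiest bus with the least-exposed route; even this 3×3 toy shows a worst/best ratio above 2.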
40 CFR 90.119 - Certification procedure-testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... must select the duty cycle that will result in worst-case emission results for certification. For any... facility, in which case instrumentation and equipment specified by the Administrator must be made available... manufacturers may not use any equipment, instruments, or tools to identify malfunctioning, maladjusted, or...
Ivanoff, Michael A.
1997-01-01
Contraction scour for all modelled flows ranged from 2.1 to 4.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 14.3 to 14.4 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping and 500-year discharges. Right abutment scour ranged from 15.3 to 18.5 ft. The worst-case right abutment scour occurred at the 100-year and the incipient roadway-overtopping discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on the in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
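The idea behind RS-based extrapolation is that the reference signal occupies one resource element at roughly constant power regardless of traffic, and subcarrier powers add, so fields scale with the square root of the number of occupied subcarriers at full load. A sketch under that assumption (the 1200-subcarrier figure is a typical 20 MHz LTE value, not a number from this paper):

```python
import math

def worst_case_lte_field(e_rs, n_subcarriers):
    """Extrapolate a measured per-subcarrier reference-signal field
    E_RS (V/m) to a full-load worst case: powers add across subcarriers,
    so fields combine as sqrt(N)."""
    return e_rs * math.sqrt(n_subcarriers)

# Consistency check with the abstract: 1.9 V/m against the 61 V/m
# ICNIRP general-public reference level (f > 2 GHz) is a factor of ~32.
margin = 61.0 / 1.9
```

This is why the measured RS level alone, independent of instantaneous traffic, suffices to bound the worst-case exposure.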
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.
Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-09-18
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with the worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, based on the constrained concave-convex procedure (CCCP), that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
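Dinkelbach's algorithm turns a fractional objective max f(x)/g(x), g > 0, into a sequence of parametric problems max f(x) - q·g(x). A generic sketch with a toy one-dimensional example whose parametric argmax has a closed form (illustrative only; the paper applies the method to the EE ratio via CCCP):

```python
def dinkelbach(f, g, argmax_aux, x0, tol=1e-9, max_iter=100):
    """Dinkelbach's method for max f(x)/g(x): at each step set
    q = f(x)/g(x) and solve the parametric problem max f(x) - q*g(x);
    q converges to the optimal ratio as the parametric optimum hits 0."""
    x = x0
    for _ in range(max_iter):
        q = f(x) / g(x)
        x = argmax_aux(q)            # solver for the parametric subproblem
        if f(x) - q * g(x) < tol:    # F(q) ~ 0  <=>  q is optimal
            break
    return x, f(x) / g(x)

# Toy example: maximize (1 - (x-1)^2) / (x+1).
# Here the parametric argmax is closed-form: x = 1 - q/2.
x_star, q_star = dinkelbach(lambda x: 1 - (x - 1) ** 2,
                            lambda x: x + 1,
                            lambda q: 1 - q / 2,
                            x0=0.0)
```

In the paper's setting the inner argmax is itself the CCCP iteration over beamforming variables; the outer Dinkelbach loop then recovers the maximum energy-efficiency ratio.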
MP3 player listening sound pressure levels among 10 to 17 year old students.
Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M
2011-11-01
Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤ 75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
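The Lex figures follow from normalizing an equivalent continuous level to an 8-hour day, Lex,8h = Leq + 10·log10(T/8). A sketch, using the abstract's median values as a consistency check:

```python
import math

def lex_8h(leq_dba, hours_per_day):
    """Daily noise exposure level normalized to an 8 h working day:
    Lex,8h = Leq + 10*log10(T/8). Halving the duration drops Lex by ~3 dB."""
    return leq_dba + 10 * math.log10(hours_per_day / 8.0)

# Median typical listening level of 68 dBA for roughly half an hour a day
# lands near the reported median Lex of 56 dBA.
median_lex = lex_8h(68, 0.5)
```

The same formula explains the 85 dBA occupational comparison: a level 3 dB above the limit is permissible for only half the 8-hour reference duration.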
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System
Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-01-01
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with the worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, based on the constrained concave-convex procedure (CCCP), that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019
Boehmler, Erick M.; Weber, Matthew A.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.3 ft. The worst-case contraction scour occurred at the incipient overtopping discharge, which was less than the 100-year discharge. Abutment scour ranged from 6.2 to 9.4 ft. The worst-case abutment scour for the right abutment was 9.4 feet at the 100-year discharge. The worst-case abutment scour for the left abutment was 8.6 feet at the incipient overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Degnan, James R.
1997-01-01
Contraction scour for all modelled flows ranged from 2.6 to 4.6 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge. The left abutment scour ranged from 11.6 to 12.1 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. The right abutment scour ranged from 13.6 to 17.9 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Bergschmidt, Philipp; Dammer, Rebecca; Zietz, Carmen; Finze, Susanne; Mittelmeier, Wolfram; Bader, Rainer
2016-06-01
Evaluation of the adhesive strength of femoral components to the bone cement is a relevant parameter for predicting implant safety. In the present experimental study, three types of cemented femoral components (metallic, ceramic and silica/silane-layered ceramic) of the bicondylar Multigen Plus knee system, implanted on composite femora, were analysed. A pull-off test of the femoral components was performed after different loading and cementing conditions (four groups, with n=3 metallic, ceramic and silica/silane-layered ceramic components in each group). Pull-off forces were comparable for the metallic and the silica/silane-layered ceramic femoral components (mean 4769 N and 4298 N) under the standard test condition, whereas uncoated ceramic femoral components showed reduced pull-off forces (mean 2322 N). Loading under worst-case conditions led to decreased adhesive strength, through loosening at the interface between implant and bone cement, for the uncoated metallic and ceramic femoral components. Silica/silane-coated ceramic components remained stably fixed even under worst-case conditions. Loading at high flexion angles can induce interfacial tensile stress, which could promote early implant loosening. In conclusion, a silica/silane coating layer on the femoral component increased its adhesive strength to bone cement. Thicker cement mantles (>2 mm) reduce the adhesive strength of the femoral component and can increase the risk of cement break-off.
Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome.
Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack
2016-05-10
Gleason scoring (GS) has major deficiencies, and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has recently been agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse for outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core, as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure.
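The survival comparison described above rests on standard machinery; as a minimal illustration (not the authors' code), a Kaplan-Meier curve, the quantity the log-rank test compares across the five grade groups, can be sketched as follows. Variable names and the toy data are illustrative only.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.

    times  -- follow-up time for each case
    events -- 1 if the endpoint (e.g. prostate cancer death) occurred, 0 if censored
    Returns [(event time, survival probability)] at each distinct event time.
    """
    pairs = sorted(zip(times, events))
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        at_risk = sum(1 for tt, _ in pairs if tt >= t)       # still under follow-up at t
        deaths = sum(1 for tt, e in pairs if tt == t and e)  # events exactly at t
        if deaths:
            surv *= (at_risk - deaths) / at_risk             # product-limit step
            curve.append((t, surv))
    return curve
```

A log-rank test then compares, at each event time, the observed deaths per group against the expectation under a common survival curve.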
NASA Technical Reports Server (NTRS)
Lindsey, J. F.
1976-01-01
The isolation between the upper S-band quad antenna and the S-band payload antenna on the shuttle orbiter is calculated using a combination of plane-surface and curved-surface theories along with worst-case values. A minimum value of 60 dB isolation is predicted based on recent antenna pattern data, antenna locations on the orbiter, curvature effects, dielectric covering effects, and edge effects of the payload bay. The calculated value of 60 dB is significantly greater than the baseline value of 40 dB. Use of the new value will result in the design of smaller, lighter, and less expensive filters for the S-band transponder and the S-band payload interrogator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.
1987-01-01
Analysis results for multiple steam generator blowdown caused by an auxiliary feedwater steam-line break, performed with the RETRAN-02 MOD 003 computer code, are presented to demonstrate the capabilities of the RETRAN code to predict system transient response for verifying changes in operational procedures and supporting plant equipment modifications. A typical four-loop Westinghouse pressurized water reactor was modeled using best-estimate versus worst-case licensing assumptions. This paper presents analyses performed to evaluate the necessity of implementing an auxiliary feedwater steam-line isolation modification. RETRAN transient analysis can be used to determine core cooling capability response, departure from nucleate boiling ratio (DNBR) status, and reactor trip signal actuation times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunter, B.J.
Environmental and breathing zone samples were analyzed for ethylene oxide at United Hospital, Grand Forks, North Dakota in January 1985. The survey was requested by the management to determine if using ethylene oxide for sterilization purposes posed a health risk. All employees (number not specified) in the central supply department were interviewed. These concentrations originated from an old sterilizer. The sterilizer was not normally used, but was operated on the day of the survey to simulate a worst-case situation. None of the workers had any medical complaints. The author concludes that a health hazard due to ethylene oxide does not exist at the facility. He recommends not using the old sterilizer until it has been refurbished and conducting periodic monitoring for ethylene oxide with an infrared analyzer.
NASA Technical Reports Server (NTRS)
Spratlin, Kenneth Milton
1987-01-01
An adaptive numeric predictor-corrector guidance is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
Multiple object tracking using the shortest path faster association algorithm.
Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
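The core routine the paper relies on, the shortest path faster algorithm (a queue-based Bellman-Ford variant that tolerates negative edge weights, as arise in min-cost flow formulations), can be sketched as below. This is a generic SPFA, not the paper's tracking formulation; the graph and names are illustrative.

```python
from collections import deque

def spfa(n, edges, source):
    """Shortest Path Faster Algorithm (queue-based Bellman-Ford).

    n      -- number of nodes (0..n-1)
    edges  -- adjacency dict {u: [(v, weight), ...]}; negative weights allowed
    source -- start node
    Returns shortest distances from source (float('inf') if unreachable).
    Assumes no negative cycle reachable from source.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    in_queue = [False] * n
    queue = deque([source])
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in edges.get(u, []):
            if dist[u] + w < dist[v]:   # relaxation step
                dist[v] = dist[u] + w
                if not in_queue[v]:     # re-enqueue only if not already pending
                    queue.append(v)
                    in_queue[v] = True
    return dist
```

In a tracking-by-flow setting, repeated shortest path searches of this kind extract one trajectory per iteration from the residual network.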
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Minimax Quantum Tomography: Estimators and Relative Entropy Bounds
Ferrie, Christopher; Blume-Kohout, Robin
2016-03-04
A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This mismatch also makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
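The classical O(1/N) benchmark mentioned above has a textbook instance: for estimating a Bernoulli parameter p from N trials under squared-error loss, the estimator (S + √N/2)/(N + √N) has risk that is constant in p, the hallmark of a minimax rule. This is the classical counterpart only, not the paper's quantum estimator, and the closed-form risk below is the standard result for this loss.

```python
import math

def minimax_risk(p, n):
    """Exact mean-squared error of the classical minimax Bernoulli estimator
    p_hat = (S + sqrt(n)/2) / (n + sqrt(n)), where S ~ Binomial(n, p).
    The bias and variance terms combine to n / (4 * (n + sqrt(n))**2),
    which does not depend on p (constant risk)."""
    s = math.sqrt(n)
    bias = s * (0.5 - p) / (n + s)
    var = n * p * (1 - p) / (n + s) ** 2
    return var + bias ** 2
```

The worst-case risk 1/(4(√N+1)²) is strictly below the maximum-likelihood estimator's worst-case risk of 1/(4N), illustrating the minimax trade: slightly worse on easy parameters, better in the worst case.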
NASA Technical Reports Server (NTRS)
Coakley, P.; Kitterer, B.; Treadaway, M.
1982-01-01
Charging and discharging characteristics of dielectric samples exposed to 1-25 keV and 25-100 keV electrons in a laboratory environment are reported. The materials examined comprised OSR, Mylar, Kapton, perforated Kapton, and alpha-quartz, serving as models for materials employed on spacecraft in geosynchronous orbit. The tests were performed in a vacuum chamber with electron guns whose beams were rastered over the entire surface of the planar samples. The specimens were examined in low-impedance-grounded, high-impedance-grounded, and isolated configurations. The worst-case and average peak discharge currents were observed to be independent of the incident electron energy, the time-dependent changes in the worst-case discharge peak current were independent of the energy, and predischarge surface potentials were only negligibly dependent on the incident monoenergetic electron energy.
Worst-case space radiation environments for geocentric missions
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Seltzer, S. M.
1976-01-01
Worst-case possible annual radiation fluences of energetic charged particles in the terrestrial space environment, and the resultant depth-dose distributions in aluminum, were calculated in order to establish absolute upper limits to the radiation exposure of spacecraft in geocentric orbits. The results are a concise set of data intended to aid in the determination of the feasibility of a particular mission. The data may further serve as guidelines in the evaluation of standard spacecraft components. Calculations were performed for each significant particle species populating or visiting the magnetosphere, on the basis of volume occupied by or accessible to the respective species. Thus, magnetospheric space was divided into five distinct regions using the magnetic shell parameter L, which gives the approximate geocentric distance (in earth radii) of a field line's equatorial intersect.
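The regions bounded by the magnetic shell parameter L can be tied to physical distances under the usual dipole-field idealization (a simplification of the full magnetosphere model; the constants and function names below are illustrative): a field line of shell L crosses the equator at r = L·R_E, follows r = L·R_E·cos²λ, and reaches r = R_E at the invariant latitude where cos²λ = 1/L.

```python
import math

R_E = 6371.0  # mean Earth radius, km

def equatorial_distance_km(L):
    """Geocentric distance where a dipole field line of shell L crosses the equator."""
    return L * R_E

def dipole_radius_km(L, lat_deg):
    """Radius of the L-shell field line at magnetic latitude lat_deg: r = L*R_E*cos^2(lambda)."""
    return L * R_E * math.cos(math.radians(lat_deg)) ** 2

def invariant_latitude_deg(L):
    """Latitude where the L-shell line reaches r = R_E (dipole): cos^2(lambda) = 1/L."""
    return math.degrees(math.acos(math.sqrt(1.0 / L)))
```

For example, L = 6.6 (roughly geosynchronous altitude) crosses the equator near 42,000 km geocentric and maps down to an invariant latitude of about 67 degrees.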
``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
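The "carbon credit" idea in the paper's amortised analysis can be illustrated with the textbook potential-method example: a doubling array whose pushes are each charged a constant number of credits, banking enough to prepay every future copy. This toy accounting sketch is an illustration of amortised bounding in general, not the paper's type-based inference.

```python
class CreditArray:
    """Doubling array that compares actual cost against an amortised charge.

    Accounting follows the potential method with Phi = 2*size - capacity:
    charging 3 credits per push (1 for the write, 2 banked toward future
    copies) always covers the true cost, including every doubling."""

    AMORTISED_COST = 3

    def __init__(self):
        self.data = [None]    # capacity 1
        self.size = 0
        self.actual_cost = 0  # element writes actually performed
        self.charged = 0      # credits charged at the amortised rate

    def push(self, x):
        if self.size == len(self.data):       # full: double capacity
            self.data = self.data + [None] * len(self.data)
            self.actual_cost += self.size     # cost of copying old elements
        self.data[self.size] = x
        self.size += 1
        self.actual_cost += 1                 # cost of the write itself
        self.charged += self.AMORTISED_COST
```

After 100 pushes the true cost is 100 writes plus 1+2+4+...+64 = 127 copies, comfortably under the 300 credits charged; the analysis in the paper infers such credit rates statically, per datatype, rather than by instrumentation.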
Hepatitis A and E Co-Infection with Worst Outcome.
Saeed, Anjum; Cheema, Huma Arshad; Assiri, Asaad
2016-06-01
Infections are still a major problem in developing countries like Pakistan because of poor sewage disposal and economic restraints. Acute viral hepatitis A and E are not uncommon in the pediatric age group because of unhygienic food handling and poor sewage disposal, but the majority recover well without any complications. Co-infections are rare occurrences, and physicians need to be well aware while managing such conditions to avoid the worst outcome. Co-infection with hepatitis A and E is reported occasionally in the literature; other concurrent infections, such as hepatitis A with Salmonella and hepatotropic viruses like viral hepatitis B and C, are also described. Co-infections should be kept in consideration when someone presents with atypical symptoms or an unusual disease course, as in the present case. We report here a girl child who had acute hepatitis A and E concurrent infections, presented with hepatic encephalopathy, and had the worst outcome despite all the supportive measures being taken.
Implementation of School Health Promotion: Consequences for Professional Assistance
ERIC Educational Resources Information Center
Boot, N. M. W. M.; de Vries, N. K.
2012-01-01
Purpose: This case study aimed to examine the factors influencing the implementation of health promotion (HP) policies and programs in secondary schools and the consequences for professional assistance. Design/methodology/approach: Group interviews were held in two schools that represented the best and worst case of implementation of a health…
Compression in the Superintendent Ranks
ERIC Educational Resources Information Center
Saron, Bradford G.; Birchbauer, Louis J.
2011-01-01
Sadly, the fiscal condition of school systems now not only is troublesome, but in some cases has surpassed all expectations for the worst-case scenario. Among the states, one common response is to drop funding for public education to inadequate levels, leading to permanent program cuts, school closures, staff layoffs, district dissolutions and…
Operating in the space plasma environment: A spacecraft charging study of the Solar X-ray Imager
NASA Technical Reports Server (NTRS)
Herr, Joel L.; Mccollum, Matthew B.; James, Bonnie F.
1994-01-01
This study presents the results of a spacecraft charging effects protection study conducted on the Solar X-ray Imager (SXI). The SXI is being developed by NASA Marshall Space Flight Center for NOAA's Space Environment Laboratory, and will be used to aid in forecasting energetic particle events and geomagnetic storms. Images will provide information on the intensity and location of solar flares, coronal mass ejections, and high speed solar streams. The SXI will be flown on a next-generation GOES sometime in the mid to late 1990's. Charging due to the encounter with a worst-case magnetic substorm environment is modeled using the NASCAP/GEO computer code. Charging levels of exterior surfaces and the floating potential of the spacecraft relative to plasma are determined as a function of spacecraft design, operational configuration, and orbital conditions. Areas where large surface voltage gradients exist on or near the SXI are identified as possible arc-discharge sites. Results of the charging analysis are then used to develop design recommendations that will limit the effects of spacecraft charging on the SXI operation.
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2013 CFR
2013-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2012 CFR
2012-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2014 CFR
2014-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
Code of Federal Regulations, 2011 CFR
2011-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2012 CFR
2012-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2013 CFR
2013-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2014 CFR
2014-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2010 CFR
2010-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Adaptive Attitude Control of the Crew Launch Vehicle
NASA Technical Reports Server (NTRS)
Muse, Jonathan
2010-01-01
An H(sub infinity)-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal-complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H(sub infinity)-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.
You can use this free software program to complete the Off-site Consequence Analyses (both worst case scenarios and alternative scenarios) required under the Risk Management Program rule, so that you don't have to do calculations by hand.
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, J.V. III; Cramer, S.N.; Knight, J.R.
1980-09-01
Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst-case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.
NASA Technical Reports Server (NTRS)
Olson, S. L.
2004-01-01
NASA's current method of material screening determines fire resistance under conditions representing a worst case for normal-gravity flammability: the Upward Flame Propagation Test (Test 1). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.
NASA Technical Reports Server (NTRS)
Lind, Richard C. (Inventor); Brenner, Martin J.
2001-01-01
A structured singular value (mu) analysis method computes flutter margins from the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators so they accurately account for errors in the computed model and for the observed range of aircraft dynamics caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. By introducing mu as a flutter margin parameter, this mu-based approach computes flutter margins that are worst case with respect to the modeling uncertainty. These margins are used to determine when the aircraft is approaching a flutter condition and to define an expanded safe flight envelope that is accepted with more confidence than traditional methods, which track damping trends as a measure of the tendency toward instability and do not update the analysis with flight data.
NASA Technical Reports Server (NTRS)
Holladay, Jon; Day, Greg; Gill, Larry
2004-01-01
Spacecraft are typically designed with a primary focus on weight in order to meet launch vehicle performance parameters. However, for pressurized and/or man-rated spacecraft, it is also necessary to have an understanding of the vehicle operating environments to properly size the pressure vessel. Proper sizing of the pressure vessel requires an understanding of the space vehicle's life cycle and compares the physical design optimization (weight and launch "cost") to downstream operational complexity and total life cycle cost. This paper will provide an overview of some major environmental design drivers and provide examples for calculating the optimal design pressure versus a selected set of design parameters related to thermal and environmental perspectives. In addition, this paper will provide a generic set of cracking pressures for both positive and negative pressure relief valves that encompasses worst case environmental effects for a variety of launch / landing sites. Finally, several examples are included to highlight pressure relief set points and vehicle weight impacts for a selected set of orbital missions.
Quantum one-way permutation over the finite field of two elements
NASA Astrophysics Data System (ADS)
de Castro, Alexandre
2017-06-01
In quantum cryptography, a one-way permutation is a bounded unitary operator U: H → H on a Hilbert space H that is easy to compute on every input, but hard to invert given the image of a random input. Levin (Probl Inf Transm 39(1):92-103, 2003) has conjectured that the unitary transformation g(a, x) = (a, f(x) + ax), where f is any length-preserving function and a, x ∈ GF(2^‖x‖), is an information-theoretically secure operator within a polynomial factor. Here, we show that Levin's one-way permutation is provably secure because its output values are four maximally entangled two-qubit states, and the probability of factoring them approaches zero faster than the multiplicative inverse of any positive polynomial poly(x) over the Boolean ring of all subsets of x. Our results demonstrate through well-known theorems that the existence of classical one-way functions implies the existence of a universal quantum one-way permutation that cannot be inverted in subexponential time in the worst case.
Investigation of structural behavior of candidate Space Station structure
NASA Technical Reports Server (NTRS)
Hedgepeth, John M.; Miller, Richard K.
1989-01-01
Quantitative evaluations of the structural loads, stiffness and deflections of an example Space Station truss due to a variety of influences, including manufacturing tolerances, assembly operations, and operational loading are reported. The example truss is a dual-keel design composed of 5-meter-cube modules. The truss is 21 modules high and 9 modules wide, with a transverse beam 15 modules long. One problem of concern is the amount of mismatch which will be expected when the truss is being erected on orbit. Worst-case thermal loading results in less than 0.5 inch of mismatch. The stiffness of the interface is shown to be less than 100 pounds per inch. Thus, only moderate loads will be required to overcome the mismatch. The problem of manufacturing imperfections is analyzed by the Monte Carlo approach. Deformations and internal loads are obtained for ensembles of 100 example trusses. All analyses are performed on a personal computer. The necessary routines required to supplement commercially available programs are described.
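The Monte Carlo treatment of manufacturing imperfections over an ensemble of 100 trusses can be sketched generically: sample member-length errors and summarize the mean and worst-case closure mismatch. The tolerance, member count, and the simple sum-of-errors mismatch model below are illustrative assumptions, not the study's values or its structural model.

```python
import random
import statistics

def simulate_mismatch(n_members=9, tol_sigma=0.005, n_trials=100, seed=1):
    """Monte Carlo sketch of assembly mismatch from member-length tolerances.

    Each member's length error is drawn from N(0, tol_sigma) (inches); the
    mismatch at the closing joint of a chain of members is modeled as the
    accumulated sum of errors. Returns (mean, worst) absolute mismatch over
    an ensemble of n_trials trusses.
    """
    rng = random.Random(seed)  # seeded for a reproducible ensemble
    mismatches = []
    for _ in range(n_trials):
        errors = [rng.gauss(0.0, tol_sigma) for _ in range(n_members)]
        mismatches.append(abs(sum(errors)))
    return statistics.mean(mismatches), max(mismatches)
```

The same ensemble machinery extends naturally to deformations and internal loads once each sampled truss is run through a stiffness analysis, as the study does.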
An Alaskan Theater Airlift Model.
1982-02-19
overt attack on American soil. In any case, such a reaction represents the worst-case scenario in that theater forces would be denied the advantages of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-15
... Service (NPS) for the Florida leafwing and the pine rockland ecosystem, in general. Sea Level Rise... habitat. In the best case scenario, which assumes low sea level rise, high financial resources, proactive... human population. In the worst case scenario, which assumes high sea level rise, low financial resources...
A Different Call to Arms: Women in the Core of the Communications Revolution.
ERIC Educational Resources Information Center
Rush, Ramona R.
A "best case" model for the role of women in the postindustrial communications era predicts positive leadership roles based on the preindustrial work characteristics of cooperation and consensus. A "worst case" model finds women entrepreneurs succumbing to the competitive male ethos and extracting the maximum amount of work…
Updated model assessment of pollution at major U. S. Airports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamartino, R.J.; Rote, D.M.
1979-02-01
The air quality impact of aircraft at and around Los Angeles International Airport (LAX) is simulated for hours of peak aircraft operation and worst-case pollutant dispersion conditions. An updated version of the Argonne Airport Vicinity Air Pollution (AVAP) model is used in the simulation; model refinements reflect new theoretical formulations and data from field programs at LAX, O'Hare, and John F. Kennedy International Airports. Maximum carbon monoxide concentrations at LAX are found to be low relative to the NAAQS. Relatively high, widespread hydrocarbon levels indicate that aircraft emissions may aggravate oxidant problems near the airport. Concentrations of oxides of nitrogen are high enough relative to proposed standards to warrant further study. Similar modeling is underway for the O'Hare and JFK airports.
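Worst-case dispersion screening of this general kind is commonly grounded in the Gaussian plume formula; as a generic illustration only (not AVAP's formulation, and with purely illustrative parameter values), the ground-level centerline concentration from a continuous source is:

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, H):
    """Ground-level centerline concentration of a Gaussian plume (g/m^3).

    Q        -- emission rate, g/s
    u        -- wind speed, m/s (low wind speed gives worst-case dispersion)
    sigma_y  -- lateral dispersion coefficient at the receptor, m
    sigma_z  -- vertical dispersion coefficient at the receptor, m
    H        -- effective release height, m (H = 0 for a ground-level source)
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-H * H / (2.0 * sigma_z ** 2))
```

The formula makes the worst-case logic explicit: concentration scales as 1/u, so the screening conditions combine the lowest plausible wind speed with the most stable (smallest sigma) atmosphere.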
Design and development of a ceramic radial turbine for the AGT101
NASA Technical Reports Server (NTRS)
Finger, D. G.; Gupta, S. K.
1982-01-01
An acceptable and feasible ceramic turbine wheel design has been achieved, and the relevant temperature, stress, and success probability analyses are discussed. The design is described, the materials selection presented, and the engine cycle conditions analysis parameters shown. Measured MOR four-point strengths are indicated for room and elevated temperatures, and engine conditions are analyzed for various cycle states, materials, power states, turbine inlet temperatures, and speeds. An advanced gas turbine ceramic turbine rotor thermal and stress model is developed, and cumulative probability of survival is shown for first and third-year properties of SiC and Si3N4 rotors under different operating conditions, computed for both blade and hub regions. Temperature and stress distributions for steady-state and worst-case shutdown transients are depicted.
Wind and turbine characteristics needed for integration of wind turbine arrays into a utility system
NASA Technical Reports Server (NTRS)
Park, G. L.
1982-01-01
Wind data and wind turbine generator (WTG) performance characteristics are often available in a form inconvenient for use by utility planners and engineers. The steps used by utility planners are summarized, and the types of wind and WTG data needed for integration of WTG arrays are suggested. These include long-term yearly velocity averages for preliminary site feasibility, hourly velocities on a 'wind season' basis for more detailed economic analysis and for reliability studies, worst-case velocity profiles for gusts, and various minute-to-hourly velocity profiles for estimating the effect of longer-term wind fluctuations on utility operations. The wind turbine data needed include electrical properties of the generator, startup and shutdown characteristics, protection characteristics, pitch control response and control strategy, and an electro-mechanical model for stability analysis.
NASA Astrophysics Data System (ADS)
Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.
2017-06-01
Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
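The batching idea behind such constructive heuristics can be illustrated with a minimal sketch. This is not one of the paper's three methods, just an assumed greedy rule: jobs with equal processing times are grouped into batches in release-date order and assigned to the earliest-free parallel batch machine.

```python
# Hedged sketch: a greedy FIFO batching rule for one stage of parallel
# batch-processing machines with equal job processing times. Illustrative
# only -- not the paper's constructive heuristics.

def batch_makespan(release_dates, batch_capacity, num_machines, proc_time):
    """Greedily batch jobs in release-date order and return the makespan."""
    jobs = sorted(release_dates)
    # Split jobs into consecutive batches of at most batch_capacity jobs.
    batches = [jobs[i:i + batch_capacity]
               for i in range(0, len(jobs), batch_capacity)]
    machine_free = [0.0] * num_machines   # when each machine becomes free
    finish = 0.0
    for batch in batches:
        ready = max(batch)                # batch starts when its last job arrives
        m = min(range(num_machines), key=machine_free.__getitem__)
        start = max(ready, machine_free[m])
        machine_free[m] = start + proc_time
        finish = max(finish, machine_free[m])
    return finish

# Example: 6 jobs released at times 0/1/2, batches of 2, two machines,
# unit processing time.
print(batch_makespan([0, 0, 1, 1, 2, 2], 2, 2, 1.0))
```

Because all processing times are equal, the only decisions are how to group jobs and where to place batches, which is why polynomial-time construction rules like this are natural candidates.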
Applying MDA to SDR for Space to Model Real-time Issues
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2007-01-01
NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.
Telemetry data storage systems technology for the Space Station Freedom era
NASA Technical Reports Server (NTRS)
Dalton, John T.
1989-01-01
This paper examines the requirements and functions of the telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units, each capable of operating at 300 Mbits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in first-out storage, fast random-access storage, and slow-access storage with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.
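Taking the two unit figures quoted in the abstract at face value, a quick back-of-envelope check shows how long one disk unit can sustain its peak rate:

```python
# Back-of-envelope check using the abstract's figures (assumed exact):
# how long can a single 160-gigabit optical disk unit record at its
# 300 Mbit/s peak rate before filling up?

GIGABIT = 1e9
MEGABIT = 1e6

capacity_bits = 160 * GIGABIT
rate_bps = 300 * MEGABIT

seconds = capacity_bits / rate_bps
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min) per unit at worst-case rate")
```

Roughly nine minutes of worst-case recording per unit, which is why worst-case data rates, not averages, drive the sizing of such buffer architectures.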
Intelligent Mobile Technologies
NASA Technical Reports Server (NTRS)
Alena, Rick; Gilbaugh, Bruce; Glass, Brian; Swanson, Keith (Technical Monitor)
2000-01-01
Testing involves commercial radio equipment approved for export and use in Canada. Testing was conducted in the Canadian High Arctic, where hilly terrain provided the worst-case testing. SFU and Canadian governmental agencies made significant technical contributions. The only technical data related to radio testing was exchanged with SFU. Test protocols are standard radio tests performed by communication technicians worldwide. The Joint Fields Operations objectives included the following: (1) to provide Internet communications services for field science work and mobile exploration systems; (2) to evaluate the range and throughput of three different medium-range radio link technologies for providing coverage of the crater area; and (3) to demonstrate collaborative software such as NetMeeting with multi-point video for the exchange of scientific information between the remote node, base camp, and science centers as part of communications testing.
RMP Guidance for Offsite Consequence Analysis
Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.
Detection of MAVs (Micro Aerial Vehicles) based on millimeter wave radar
NASA Astrophysics Data System (ADS)
Noetel, Denis; Johannes, Winfried; Caris, Michael; Hommes, Alexander; Stanko, Stephan
2016-10-01
In this paper we present two system approaches for perimeter surveillance with radar techniques focused on the detection of Micro Aerial Vehicles (MAVs). The main task of such radars is to detect movements of targets such as an individual or a vehicle approaching a facility. The systems typically cover a range of several hundred meters up to several kilometers. In particular, the capability of identifying Remotely Piloted Aircraft Systems (RPAS), which pose a growing threat to critical infrastructure areas, is of great importance nowadays. Their low cost, ease of handling, and considerable payload make them an excellent tool for unwanted surveillance or attacks. Most platforms can be equipped with all kinds of sensors or, in the worst case, with destructive devices. A typical MAV is able to take off and land vertically, to hover, and in many cases to fly forward at high speed. Thus, it can reach all kinds of places in a short time while the concealed operator of the MAV resides at a remote and risk-free location.
Trial of a slant visual range measuring device
NASA Technical Reports Server (NTRS)
Streicher, J.; Muenkel, C.; Borchardt, H.
1992-01-01
Each year, fog at airports renders some landing operations either difficult or impossible. The visibility that the pilot of a landing aircraft can expect is in that case the most important information. The visibility may steadily decrease or increase with altitude, but the existing sensors at an airport cannot distinguish between the two. If the visibility decreases with altitude, one has the worst case: ground fog. The standard visibility sensor, the transmissometer, determines only the horizontal visual range, which will be underestimated in comparison with the real visibility a pilot has on the landing approach. Described here is a new technique to measure the slant visual range, making use of a slant scanning device, an eye-safe laser radar. A comparison with commercial visibility sensors shows that it is possible to measure visibilities with the slant-looking laser radar in the range from 50 meters up to 2000 meters and even to distinguish inhomogeneities such as ground fog.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
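The cue-combination step can be pictured with a simple stand-in: fuse several noisy estimates of the camera height above the ground plane by inverse-variance weighting, with each cue's variance adapted per frame. The paper's actual mechanism is learned from data; this sketch and its numbers are purely illustrative.

```python
# Hedged sketch of per-frame cue combination for ground-plane estimation:
# inverse-variance weighted fusion of scalar height estimates. The
# variances stand in for the paper's data-driven, per-frame confidences.

def fuse_cues(estimates, variances):
    """Inverse-variance weighted fusion of scalar cue estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total               # fused estimate is more certain
    return fused, fused_var

# Example: sparse-feature cue (1.52 m, low noise this frame) and dense
# inter-frame stereo cue (1.60 m, higher noise this frame).
h, var = fuse_cues([1.52, 1.60], [0.01, 0.04])
print(round(h, 3), round(var, 4))
```

The fused estimate leans toward whichever cue is currently more confident, which is the core idea behind adapting observation covariances frame by frame.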
Numerical simulations in the development of propellant management devices
NASA Astrophysics Data System (ADS)
Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael
Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles: gas bubbles can cause cavitation and, in the worst case, may lead to system failures. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Groff, Loren; Newman, Richard L.; Foster, John V.; Crider, Dennis H.; Klyde, David H.; Huston, A. McCall
2014-01-01
Aircraft loss of control (LOC) is a leading cause of fatal accidents across all transport airplane and operational classes, and can result from a wide spectrum of hazards, often occurring in combination. Technologies developed for LOC prevention and recovery must therefore be effective under a wide variety of conditions and uncertainties, including multiple hazards, and their validation must provide a means of assessing system effectiveness and coverage of these hazards. This requires the definition of a comprehensive set of LOC test scenarios based on accident and incident data as well as future risks. This paper defines a comprehensive set of accidents and incidents over a recent 15 year period, and presents preliminary analysis results to identify worst-case combinations of causal and contributing factors (i.e., accident precursors) and how they sequence in time. Such analyses can provide insight in developing effective solutions for LOC, and form the basis for developing test scenarios that can be used in evaluating them. Preliminary findings based on the results of this paper indicate that system failures or malfunctions, crew actions or inactions, vehicle impairment conditions, and vehicle upsets contributed the most to accidents and fatalities, followed by inclement weather or atmospheric disturbances and poor visibility. Follow-on research will include finalizing the analysis through a team consensus process, defining future risks, and developing a comprehensive set of test scenarios with correlation to the accidents, incidents, and future risks. Since enhanced engineering simulations are required for batch and piloted evaluations under realistic LOC precursor conditions, these test scenarios can also serve as a high-level requirement for defining the engineering simulation enhancements needed for generating them.
Planning Education for Regional Economic Integration: The Case of Paraguay and MERCOSUR.
ERIC Educational Resources Information Center
McGinn, Noel
This paper examines the possible impact of MERCOSUR on Paraguay's economic and educational systems. MERCOSUR is a trade agreement among Argentina, Brazil, Paraguay, and Uruguay, under which terms all import tariffs among the countries will be eliminated by 1994. The countries will enter into a common economic market. The worst-case scenario…
Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses
NASA Technical Reports Server (NTRS)
Choi, Michael K.; Emery, Josh; Delbo, Marco
2014-01-01
A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.
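The simplest bound behind such worst hot and worst cold maps is the instantaneous subsolar equilibrium temperature of a smooth sphere, T = ((1 - A)·S / (η·ε·σ))^0.25. The albedo, emissivity, beaming parameter, and heliocentric distances below are assumed round-number values for illustration, not the OSIRIS-REx design inputs.

```python
# Hedged sketch: subsolar equilibrium temperature of a spherical asteroid.
# All physical parameters here are assumed illustrative values.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant at 1 au, W m^-2

def subsolar_temp(r_au, albedo=0.04, emissivity=0.9, eta=1.0):
    """Instantaneous equilibrium temperature at the subsolar point (K)."""
    flux = (1.0 - albedo) * S0 / r_au**2
    return (flux / (eta * emissivity * SIGMA)) ** 0.25

# Worst hot case near perihelion (~0.90 au, assumed) vs worst cold case
# near aphelion (~1.36 au, assumed):
print(f"hot:  {subsolar_temp(0.90):.0f} K")
print(f"cold: {subsolar_temp(1.36):.0f} K")
```

A full thermophysical model adds rotation, thermal inertia, and surface roughness on top of this balance, but the heliocentric-distance spread already explains why separate hot-case and cold-case maps are needed for thermal margins.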
Availability Simulation of AGT Systems
DOT National Transportation Integrated Search
1975-02-01
The report discusses the analytical and simulation procedures that were used to evaluate the effects of failure in a complex dual-mode transportation system based on a worst case steady-state condition. The computed results are an availability figure ...
Carbon monoxide screen for signalized intersections COSIM, version 3.0 : technical documentation.
DOT National Transportation Integrated Search
2008-07-01
The Illinois Department of Transportation (IDOT) currently uses the computer screening model Illinois : CO Screen for Intersection Modeling (COSIM) to estimate worst-case CO concentrations for proposed roadway : projects affecting signalized intersec...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis
Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.
RMP Guidance for Chemical Distributors - Chapter 4: Offsite Consequence Analysis
How to perform the OCA for regulated substances, informing the government and the public about potential consequences of an accidental chemical release at your facility. Includes calculations for worst-case scenario, alternative scenarios, and endpoints.
... damage to the tissue and bone supporting the teeth. In the worst cases, you can lose teeth. In gingivitis, the gums become red and swollen. ... flossing and regular cleanings by a dentist or dental hygienist. Untreated gingivitis can lead to periodontitis. If ...
NASA Technical Reports Server (NTRS)
Bury, Kristen M.; Kerslake, Thomas W.
2008-01-01
NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° toward the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.
Response of the North American corn belt to climate warming, CO2
NASA Astrophysics Data System (ADS)
1983-08-01
The climate of the North American corn belt was characterized to estimate the effects of climatic change on that agricultural region. Heat and moisture characteristics of the current corn belt were identified and mapped based on a simulated climate for a doubling of atmospheric CO2 concentrations. The result was a map of the projected corn belt corresponding to the simulated climatic change. Such projections were made with and without an allowance for earlier planting dates that could occur under a CO2-induced climatic warming. Because the direct effects of CO2 increases on plants, improvements in farm technology, and plant breeding are not considered, the resulting projections represent an extreme or worst case. The results indicate that even for such a worst case, climatic conditions favoring corn production would not extend very far into Canada. Climatic buffering effects of the Great Lakes would apparently retard northeastward shifts in corn-belt location.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1985-01-01
For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.
Centaur Propellant Thermal Conditioning Study
NASA Technical Reports Server (NTRS)
Blatt, M. H.; Pleasant, R. L.; Erickson, R. C.
1976-01-01
A wicking investigation revealed that passive thermal conditioning was feasible and provided a considerable weight advantage over active systems using throttled vent fluid in a Centaur D-1S launch vehicle. Experimental wicking correlations were obtained using empirical revisions to the analytical flow model. Thermal subcoolers were evaluated parametrically as a function of tank pressure and NPSP. Results showed that the RL10 category I engine was the best candidate for boost pump replacement, and the option showing the lowest weight penalty employed passively cooled acquisition devices, thermal subcoolers, dry ducts between burns, and pumping of subcooler coolant back into the tank. A mixing correlation was identified for sizing the thermodynamic vent system mixer. Worst case mixing requirements were determined by surveying Centaur D-1T, D-1S, IUS, and space tug vehicles. Vent system sizing was based upon worst case requirements. Thermodynamic vent system/mixer weights were determined for each vehicle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel
Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. Performance/watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low-thread-count workloads. In this paper, we describe five power management techniques that maximize the per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15% depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.
NASA Astrophysics Data System (ADS)
Bury, Kristen M.; Kerslake, Thomas W.
2008-06-01
NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° toward the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
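A standard worst-case iteration bound for primal-dual interior-point methods has the form O(√n log(1/ε)), with n the problem dimension and ε the accuracy threshold. Assuming that form with a unit constant (the true constant depends on the algorithm variant), a quick evaluation shows why such bounds are benign in practice:

```python
# Illustrative evaluation of an interior-point worst-case iteration bound
# of the form c * sqrt(n) * log(1/eps). The constant c = 1 is an assumed
# placeholder, not the method's actual constant.

import math

def ipm_iteration_bound(n, eps, c=1.0):
    """Worst-case iteration count estimate for a primal-dual IPM."""
    return math.ceil(c * math.sqrt(n) * math.log(1.0 / eps))

# A contact problem with 10,000 unknowns solved to 1e-8 accuracy:
print(ipm_iteration_bound(10_000, 1e-8))
```

Even under this worst-case estimate the iteration count grows only with the square root of the dimension, and observed convergence is typically far faster, which is what makes the approach viable for large riveting simulations.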
Statistical analysis of QC data and estimation of fuel rod behaviour
NASA Astrophysics Data System (ADS)
Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.
1991-02-01
The behaviour of fuel rods while in reactor is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (worst case dataset) in fuel rod design calculations. Distributions are not considered. The results obtained in this way are very conservative but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst case dataset by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
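The contrast between the worst-case dataset and the probabilistic approach can be sketched with a toy tolerance stack on the pellet-cladding diametral gap. All dimensions below are invented illustrative values, not actual fuel rod specifications, and the Monte Carlo percentile stands in for the response-surface machinery the paper describes.

```python
# Hedged sketch: worst-case tolerance stack vs a Monte Carlo estimate
# based on distributions. Dimensions (mm) are invented for illustration.

import random

random.seed(42)

PELLET_DIA  = (8.05, 0.01)   # (nominal, half-tolerance), assumed values
CLAD_IN_DIA = (8.22, 0.02)

def worst_case_gap():
    """Superimpose unfavorable tolerance limits (smallest possible gap)."""
    return (CLAD_IN_DIA[0] - CLAD_IN_DIA[1]) - (PELLET_DIA[0] + PELLET_DIA[1])

def monte_carlo_gap(n=100_000):
    """Sample gaps assuming each tolerance band is +/-3 sigma of a normal."""
    gaps = []
    for _ in range(n):
        pellet = random.gauss(PELLET_DIA[0], PELLET_DIA[1] / 3)
        clad = random.gauss(CLAD_IN_DIA[0], CLAD_IN_DIA[1] / 3)
        gaps.append(clad - pellet)
    gaps.sort()
    return gaps[int(0.001 * n)]   # 0.1st-percentile gap

print(f"worst-case gap:      {worst_case_gap():.3f} mm")
print(f"0.1% Monte Carlo gap: {monte_carlo_gap():.3f} mm")
```

The probabilistic 0.1st-percentile gap is less pessimistic than the superimposed-limits value, and, unlike the worst-case dataset, its degree of conservatism is explicitly quantified.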
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
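For contrast with the graph-based approach, a minimal chronological backtracking solver (the baseline that decomposition and omega-CDBT improve upon) looks like this. The graph-colouring example and all names are illustrative only.

```python
# Minimal chronological backtracking CSP solver, shown as the baseline
# search method; omega-CDBT additionally exploits constraint-graph
# structure. Purely illustrative.

def backtrack(variables, domains, constraints, assignment=None):
    """Return one satisfying assignment as a dict, or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # A constraint passes until all of its variables are assigned.
        if all(check(assignment) for check in constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Example: 3-colour a triangle (w, x, y) plus a pendant vertex z.
neq = lambda a, b: lambda s: a not in s or b not in s or s[a] != s[b]
vars_ = ["w", "x", "y", "z"]
doms = {v: ["r", "g", "b"] for v in vars_}
cons = [neq("w", "x"), neq("x", "y"), neq("w", "y"), neq("y", "z")]
print(backtrack(vars_, doms, cons))
```

Plain backtracking has exponential worst-case cost in the number of variables; the decomposition methods discussed above instead bound the cost by structural parameters of the constraint graph, such as its tree-width.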
ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?
La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim
2014-10-01
Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved the F1717 standard for the assessment of mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent the clinical use, but available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or measurements on biplanar stereoradiography in 13 patients. Other mechanical variables describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents the anatomy of an instrumented average thoracolumbar segment quite well. The stress on the pedicular screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit, and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst-case condition we found at the L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.
Learning Search Control Knowledge for Deep Space Network Scheduling
NASA Technical Reports Server (NTRS)
Gratch, Jonathan; Chien, Steve; DeJong, Gerald
1993-01-01
While the general class of most scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to perform much better than exponential time.
Availability Analysis of Dual Mode Systems
DOT National Transportation Integrated Search
1974-04-01
The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...
Part of a May 1999 series on the Risk Management Program Rule and issues related to chemical emergency management. Explains hazard versus risk, worst-case and alternative release scenarios, flammable endpoints and toxic endpoints.
General RMP Guidance - Chapter 4: Offsite Consequence Analysis
This chapter provides basic compliance information, not modeling methodologies, for people who plan to do their own air dispersion modeling. OCA is a required part of the risk management program, and involves worst-case and alternative release scenarios.
INCORPORATING NONCHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS
The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2011 CFR
2011-07-01
... limits of current technology, for the range of environmental conditions anticipated at your facility; and... Society for Testing of Materials (ASTM) publication F625-94, Standard Practice for Describing...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., materials, support vessels, and strategies listed are suitable, within the limits of current technology, for... equipment. Examples of acceptable terms include those defined in American Society for Testing of Materials...
Characteristics of worst hour rainfall rate for radio wave propagation modelling in Nigeria
NASA Astrophysics Data System (ADS)
Osita, Ibe; Nymphas, E. F.
2017-10-01
Radio waves, especially at the millimeter-wave band, are known to be attenuated by rain. Radio engineers and designers need to be able to predict the time of day when radio signals will be attenuated so as to provide measures to mitigate this effect. This is achieved by characterizing the rainfall intensity for a particular region of interest into worst month and worst hour of the day. This paper characterizes rainfall in Nigeria into worst year, worst month, and worst hour. It is shown that, for the period of study, 2008 and 2009 are the worst years, while September is the most frequent worst month in most of the stations. The evening hours (LT) are the worst hours of the day in virtually all the stations.
Stressful life events and catechol-O-methyl-transferase (COMT) gene in bipolar disorder.
Hosang, Georgina M; Fisher, Helen L; Cohen-Woods, Sarah; McGuffin, Peter; Farmer, Anne E
2017-05-01
A small body of research suggests that gene-environment interactions play an important role in the development of bipolar disorder. The aim of the present study is to contribute to this work by exploring the relationship between stressful life events and the catechol-O-methyl-transferase (COMT) Val158Met polymorphism in bipolar disorder. Four hundred eighty-two bipolar cases and 205 psychiatrically healthy controls completed the List of Threatening Experiences Questionnaire. Bipolar cases reported the events experienced 6 months before their worst depressive and manic episodes; controls reported those events experienced 6 months prior to their interview. The genotypic information for the COMT Val158Met variant (rs4680) was extracted from GWAS analysis of the sample. The impact of stressful life events was moderated by the COMT genotype for the worst depressive episode using a Val dominant model (adjusted risk difference = 0.09, 95% confidence intervals = 0.003-0.18, P = .04). For the worst manic episodes no significant interactions between COMT and stressful life events were detected. This is the first study to explore the relationship between stressful life events and the COMT Val158Met polymorphism focusing solely on bipolar disorder. The results of this study highlight the importance of the interplay between genetic and environmental factors for bipolar depression. © 2017 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Zirkel, Sabrina; Pollack, Terry M.
2016-01-01
We present a case analysis of the controversy and public debate generated from a school district's efforts to address racial inequities in educational outcomes by diverting special funds from the highest performing students seeking elite college admissions to the lowest performing students who were struggling to graduate from high school.…
2008-03-01
Adversarial Tripolarity ... Fallen Nuclear Dominoes ... power dimension, it is possible to imagine a best case (deep concert) and a worst case (adversarial tripolarity) and some less extreme outcomes, one... vanquished and the sub-regions have settled into relative stability). 5. Adversarial U.S.-Russia-China tripolarity: In this world, the regional
ERIC Educational Resources Information Center
Marginson, Simon
This study examined the character of the emerging systems of corporate management in Australian universities and their effects on academic and administrative practices, focusing on relations of power. Case studies were conducted at 17 individual universities of various types. In each institution, interviews were conducted with senior…
Elementary Social Studies in 2005: Danger or Opportunity?--A Response to Jeff Passe
ERIC Educational Resources Information Center
Libresco, Andrea S.
2006-01-01
From the emphasis on lower-level test-prep materials to the disappearance of the subject altogether, elementary social studies is, in the best case scenario, being tested and, thus, taught with a heavy emphasis on recall; and, in the worst-case scenario, not being taught at all. In this article, the author responds to Jeff Passe's views on…
Thermal Analysis of a Metallic Wing Glove for a Mach-8 Boundary-Layer Experiment
NASA Technical Reports Server (NTRS)
Gong, Leslie; Richards, W. Lance
1998-01-01
A metallic 'glove' structure has been built and attached to the wing of the Pegasus(trademark) space booster. An experiment on the upper surface of the glove has been designed to help validate boundary-layer stability codes in a free-flight environment. Three-dimensional thermal analyses have been performed to ensure that the glove structure design would be within allowable temperature limits in the experiment test section of the upper skin of the glove. Temperature results obtained from the design-case analysis show a peak temperature at the leading edge of 490 F. For the upper surface of the glove, approximately 3 in. back from the leading edge, temperature calculations indicate transition occurs at approximately 45 sec into the flight profile. A worst-case heating analysis has also been performed to ensure that the glove structure would not have any detrimental effects on the primary objective of the Pegasus launch. A peak temperature of 805 F has been calculated on the leading edge of the glove structure. The temperatures predicted from the design case are well within the temperature limits of the glove structure, and the worst-case heating analysis temperature results are acceptable for the mission objectives.
Formation Flying for Distributed InSAR
NASA Technical Reports Server (NTRS)
Scharf, Daniel P.; Murray, Emmanuell A.; Ploen, Scott R.; Gromov, Konstantin G.; Chen, Curtis W.
2006-01-01
We consider two spacecraft flying in formation to create interferometric synthetic aperture radar (InSAR). Several candidate orbits for such an InSAR formation have been previously determined based on radar performance and Keplerian orbital dynamics. However, without active control, disturbance-induced drift can degrade radar performance and (in the worst case) cause a collision. This study evaluates the feasibility of operating the InSAR spacecraft as a formation, that is, with inter-spacecraft sensing and control. We describe the candidate InSAR orbits, design formation guidance and control architectures and algorithms, and report the Δv and control acceleration requirements for the candidate orbits for several tracking performance levels. As part of determining formation requirements, a formation guidance algorithm called Command Virtual Structure is introduced that can reduce the Δv requirements compared to standard Leader/Follower formation approaches.
NASA Astrophysics Data System (ADS)
Olofsson, K. Erik J.; Brunsell, Per R.; Witrant, Emmanuel; Drake, James R.
2010-10-01
Recent developments and applications of system identification methods for the reversed-field pinch (RFP) machine EXTRAP T2R have yielded plasma response parameters for decoupled dynamics. These data sets are fundamental for a real-time implementable fast Fourier transform (FFT) decoupled discrete-time fixed-order strongly stabilizing synthesis as described in this work. Robustness is assessed over the data set by bootstrap calculation of the worst-case H∞-gain distribution of the sensitivity transfer function. Output tracking and magnetohydrodynamic mode m = 1 tracking are considered in the same framework simply as two distinct weighted traces of a performance channel output-covariance matrix as derived from the closed-loop discrete-time Lyapunov equation. The behaviour of the resulting multivariable controller is investigated with dedicated T2R experiments.
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
In order to assist in the design of radiation shielding an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design which saves time and ultimately spacecraft weight.
Efficient algorithms for computing a strong rank-revealing QR factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, M.; Eisenstat, S.C.
1996-07-01
Given an m × n matrix M with m ≥ n, it is shown that there exists a permutation Π and an integer k such that the QR factorization given by equation (1) reveals the numerical rank of M: the k × k upper-triangular matrix A_k is well conditioned, the 2-norm of C_k is small, and B_k is linearly dependent on A_k with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations, and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn²) floating-point operations in the worst case.
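A factorization of this kind can be obtained in practice from column-pivoted QR, the baseline the abstract compares against; the strong RRQR algorithms add further column interchanges to guarantee the bounds above. A minimal sketch using SciPy (the test matrix and rank tolerance are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.linalg import qr

# Build a 6x4 matrix of numerical rank 2 (product of two rank-2 factors).
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

# Column-pivoted QR: M[:, piv] = Q @ R, with |R[0,0]| >= |R[1,1]| >= ...
Q, R, piv = qr(M, pivoting=True)

# The trailing diagonal entries of R collapse at the numerical rank.
diag = np.abs(np.diag(R))
rank = int(np.sum(diag > 1e-10 * diag[0]))
print(rank)  # 2
```

The leading k columns selected by the pivoting play the role of A_k above; strong RRQR additionally bounds how well the remaining columns are represented by them.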
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
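The latency effect described above amounts to propagating the filter's post-update covariance through the dynamics for the delay interval. A minimal sketch under a constant-velocity model with continuous white-acceleration process noise (all numeric values are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Growth of track error when a state estimate is extrapolated past the last
# update. State is [position, velocity]; values below are illustrative.
dt = 1.0      # extrapolation delay (s), assumed
q = 0.5       # white-acceleration spectral density, assumed

F = np.array([[1.0, dt],
              [0.0, 1.0]])                                    # state transition
Qd = q * np.array([[dt**3 / 3, dt**2 / 2],
                   [dt**2 / 2, dt]])                          # discrete process noise

P_post = np.diag([1.0, 0.25])      # post-update covariance, assumed
P_pred = F @ P_post @ F.T + Qd     # covariance after the delay

# Extrapolation inflates the position error.
print(np.sqrt(P_pred[0, 0]) > np.sqrt(P_post[0, 0]))  # True
```

Normalizing such curves by the measurement noise and update rate is what reduces the comparison to the small number of dimensionless parameters the abstract mentions.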
Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. 
B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. 
B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. 
D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. 
L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2016-08-01
We report on a comprehensive all-sky search for periodic gravitational waves in the frequency band 100-1500 Hz and with a frequency time derivative in the range of [-1.18, +1.00] × 10^-8 Hz/s. Such a signal could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our galaxy. This search uses the data from the initial LIGO sixth science run and covers a larger parameter space with respect to any past search. A Loosely Coherent detection pipeline was applied to follow up weak outliers in both Gaussian (95% recovery rate) and non-Gaussian (75% recovery rate) bands. No gravitational wave signals were observed, and upper limits were placed on their strength. Our smallest upper limit on the worst-case (linearly polarized) strain amplitude h_0 is 9.7 × 10^-25 near 169 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 5.5 × 10^-24. Both cases refer to all sky locations and the entire range of frequency derivative values.
Zika virus in French Polynesia 2013-14: anatomy of a completed outbreak.
Musso, Didier; Bossin, Hervé; Mallet, Henri Pierre; Besnard, Marianne; Broult, Julien; Baudouin, Laure; Levi, José Eduardo; Sabino, Ester C; Ghawche, Frederic; Lanteri, Marion C; Baud, David
2018-05-01
The Zika virus crisis exemplified the risk associated with emerging pathogens and was a reminder that preparedness for the worst-case scenario, although challenging, is needed. Herein, we review all data reported during the unexpected emergence of Zika virus in French Polynesia in late 2013. We focus on the new findings reported during this outbreak, especially the first description of severe neurological complications in adults and the retrospective description of CNS malformations in neonates, the isolation of Zika virus in semen, the potential for blood-transfusion transmission, mother-to-child transmission, and the development of new diagnostic assays. We describe the effect of this outbreak on health systems, the implementation of vector-borne control strategies, and the line of communication used to alert the international community of the new risk associated with Zika virus. This outbreak highlighted the need for careful monitoring of all unexpected events that occur during an emergence, to implement surveillance and research programmes in parallel to management of cases, and to be prepared for the worst-case scenario. Copyright © 2018 Elsevier Ltd. All rights reserved.
JPS heater and sensor lightning qualification
NASA Technical Reports Server (NTRS)
Cook, M.
1989-01-01
Simulated lightning strike testing of the Redesigned Solid Rocket Motor (RSRM) field joint protection system heater assembly was performed at the Thiokol Corp. Wendover Lightning Facility. Testing consisted of subjecting the lightning evaluation test article to simulated lightning strikes and evaluating the effects of heater cable transients on cables within the systems tunnel. The maximum short-circuit current coupled onto a United Space Boosters, Inc. operational flight cable within the systems tunnel, induced by transients from all cables external to the systems tunnel, was 92 amperes; the maximum open-circuit voltage coupled was 316 volts. The maximum short-circuit current coupled onto a United Space Boosters, Inc. operational flight cable within the systems tunnel, induced by heater power cable transients only, was 2.7 amperes; the maximum open-circuit voltage coupled was 39 volts. All heater power cable induced coupling was due to simulated lightning discharges only; no heater operating power was applied during the test. The results showed that, for a worst-case lightning discharge, the heater power cable is responsible for a 3.9 decibel increase in voltage coupling to operational flight cables within the systems tunnel. Testing also showed that current and voltage levels coupled onto cables within the systems tunnel are partially dependent on the relative locations of the cables within the systems tunnel.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.
DOT National Transportation Integrated Search
2013-06-01
Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon monoxide (CO) concentrations near s...
Global climate change: The quantifiable sustainability challenge
Population growth and the pressures spawned by increasing demands for energy and resource-intensive goods, foods and services are driving unsustainable growth in greenhouse gas (GHG) emissions. Recent GHG emission trends are consistent with worst-case scenarios of the previous de...
NASA Astrophysics Data System (ADS)
Syrakov, Dimiter; Veleva, Blagorodka; Georgievs, Emilia; Prodanova, Maria; Slavov, Kiril; Kolarova, Maria
2014-05-01
The development of the Bulgarian Emergency Response System (BERS) for short-term forecasts in case of accidental radioactive releases to the atmosphere started in the mid-1990s [1]. BERS comprises two main parts, operational and accidental, for two regions, 'Europe' and 'Northern Hemisphere'. The operational part has run automatically since 2001, using the 72-hour meteorological forecast from the DWD Global model with a spatial resolution of 1.5° and a temporal resolution of 12 hours. For specified nuclear power plants (NPPs), 3-day trajectories are calculated and presented on NIMH's specialized Web site (http://info.meteo.bg/ews/). The accidental part is applied when radioactive releases are reported or in case of emergency exercises. BERS is based on numerical weather forecast information and a long-range dispersion model accounting for the transport, dispersion, and radioactive transformations of pollutants. The core of the accidental part of the system is the Eulerian 3D dispersion model EMAP, which calculates concentration and deposition fields [2]. The system is upgraded with a 'dose calculation module' for estimation of the prognostic dose fields of 31 important radioactive gaseous and aerosol pollutants. The prognostic doses significant for the early stage of a nuclear accident are calculated as follows: the effective doses from external irradiation (air submersion + ground shine); effective dose from inhalation; summarized effective dose; and absorbed thyroid dose [3]. The output is given as 12-, 24-, 36-, 48-, 60- and 72-hour prognostic dose fields according to the updated meteorology. BERS was upgraded to simulate the dispersion of nuclear materials from the Fukushima NPP [4], and results were presented on the NIMH web site. In addition, BERS took part in the respective ENSEMBLE exercises to model 131I and 137Cs in the Fukushima source term.
In case of a governmental request for expertise, BERS was applied for environmental impact assessment of hypothetical accidental transboundary radioactive pollution. The consequences were estimated based on the worst emission scenario for the existing basic reactor type and a selection of real meteorological forecast conditions favoring direct transport of the contaminated air masses to the territory of the country in consideration. In the present work, BERS is used to estimate the worst-case accident scenario impact of a possible new unit of the Paks Nuclear Power Plant, Hungary, over the territory of Bulgaria. 1. D. Syrakov, M. Prodanova, 1998, Atmospheric Environment, 32 (24), 4367-4375. 2. D. Syrakov, M. Prodanova, K. Slavov, International J. Environment and Pollution, 20, 1-6 (2003) 286-296. 3. D. Syrakov, B. Veleva, M. Prodanova, T. Popova, M. Kolarova, Journal of Environmental Radioactivity 100 (2009) 151-156. 4. D. Syrakov, M. Prodanova, J. Intern. Sci. Publ.: Ecology & Safety Vol. 6 Part 1 (2011) 94-102. www.scientific-publications.net.
Programmable Logic Application Notes
NASA Technical Reports Server (NTRS)
Katz, Richard
2000-01-01
This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will start a series of notes concentrating on analysis techniques, with this issue's section discussing worst-case analysis requirements.
NASA Astrophysics Data System (ADS)
Michael, Ralph; Wegener, Alfred
2004-08-01
Hazards from the optical radiation of an operating microscope that cause damage at the corneal, lenticular, and retinal levels were investigated; we considered, in particular, ultraviolet radiation (UVR) and blue light. The spectral irradiance from a Zeiss operation microscope OPMI VISU 200 was measured in the corneal plane between 300 and 1100 nm. Effective irradiance and radiance were calculated with relative spectral effectiveness data from the American Conference for Governmental and Industrial Hygienists. Safe exposure time to avoid UVR injury to the lens and cornea was found to be 2 h without a filter, 4 h with a UVR filter, 200 h with a yellow filter, and 400 h with a filter combination. Safe exposure time to avoid retinal photochemical injury was found to be 3 min without a filter and with a UVR filter, 10 min with a yellow filter, and 49 min with a filter combination. The effective radiance limit for retinal thermal injury was not exceeded. The hazard due to the UVR component from the operating microscope is not critical, and operation time can be safely prolonged with the use of appropriate filters. The retinal photochemical hazard appears critical without appropriate filters, permitting only some minutes of safe exposure time. The calculated safe exposure times are for worst-case conditions and maximal light output and include a safety factor.
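The safe exposure times reported above follow from a standard calculation: weight the measured spectral irradiance by an action spectrum to obtain an effective irradiance, then divide the applicable exposure limit by it. A minimal sketch, with a hypothetical spectrum and ACGIH-style weights (the numbers below are assumptions, not the paper's data):

```python
# Minimal sketch of a safe-exposure-time calculation. The spectrum and
# weighting values are illustrative assumptions; only the 30 J/m^2
# effective UV limit is the standard ACGIH 8-hour value.

def effective_irradiance(spectral_irradiance, weighting, step_nm):
    """Effective irradiance [W/m^2]: sum over wavelength of
    E(lambda) * S(lambda) * delta(lambda)."""
    assert len(spectral_irradiance) == len(weighting)
    return sum(e * s * step_nm
               for e, s in zip(spectral_irradiance, weighting))

def safe_exposure_time_s(exposure_limit_j_m2, eff_irradiance_w_m2):
    """Maximum exposure duration [s] before the limit is reached."""
    return exposure_limit_j_m2 / eff_irradiance_w_m2

# Hypothetical 5-band UV spectrum (W/m^2/nm) and relative weights
E = [2e-4, 5e-4, 8e-4, 6e-4, 3e-4]
S = [0.5, 0.8, 1.0, 0.6, 0.3]
e_eff = effective_irradiance(E, S, step_nm=5.0)
t = safe_exposure_time_s(30.0, e_eff)   # 30 J/m^2 effective UV limit
print(f"effective irradiance {e_eff:.2e} W/m^2, safe time {t / 3600:.2f} h")
```

Adding a filter reduces E(lambda) band by band, which lengthens the safe exposure time proportionally, matching the pattern of results in the abstract.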
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
Selective robust optimization: A new intensity-modulated proton therapy optimization strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yupeng; Niemela, Perttu; Siljamaki, Sami
2015-08-15
Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm, and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
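The selective evaluation described in Methods can be illustrated with a minimal sketch (not the authors' implementation; the scenario doses and the quadratic penalty below are assumptions): each objective term is evaluated either on the per-voxel worst-case dose over uncertainty scenarios or on the nominal dose.

```python
# Minimal sketch of selective robust optimization's objective evaluation.
# Doses, prescriptions, and the quadratic penalty are illustrative
# assumptions, not the published algorithm.

def worst_case_dose(scenario_doses, kind):
    """Per-voxel worst case over scenarios: the minimum for a target
    (underdose is worst), the maximum for a normal-tissue term."""
    agg = min if kind == "target" else max
    return [agg(voxel) for voxel in zip(*scenario_doses)]

def selective_objective(scenario_doses, terms):
    """terms: list of (kind, prescription, use_worst_case). Mean
    quadratic deviation from the prescription, per term."""
    nominal = scenario_doses[0]   # scenario 0 = nominal by convention
    total = 0.0
    for kind, presc, robust in terms:
        dose = worst_case_dose(scenario_doses, kind) if robust else nominal
        total += sum((d - presc) ** 2 for d in dose) / len(dose)
    return total

# 3 scenarios x 4 voxels of a hypothetical target dose distribution (Gy)
doses = [[60.0, 60.5, 59.8, 60.2],   # nominal
         [58.9, 60.1, 59.0, 60.0],   # setup shift
         [59.5, 59.2, 60.3, 59.9]]   # range error
print(worst_case_dose(doses, "target"))   # -> [58.9, 59.2, 59.0, 59.9]
```

Making `use_worst_case` false for selected terms is what lets the nominal dose distribution stay tight while the CTV terms remain robust.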
Improving Strategies via SMT Solving
NASA Astrophysics Data System (ADS)
Gawlitza, Thomas Martin; Monniaux, David
We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations; (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of Gawlitza and Seidl [17]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, for contrived examples, performs exponentially many strategy improvement steps in the worst case; this is unsurprising, since we show that the associated abstract reachability problem is Π₂ᵖ-complete.
Dimitroulopoulou, C; Lucica, E; Johnson, A; Ashmore, M R; Sakellaris, I; Stranger, M; Goelen, E
2015-12-01
Consumer products are frequently and regularly used in the domestic environment. Realistic estimates for product use are required for exposure modelling and health risk assessment. This paper provides significant data that can be used as input for such modelling studies. A European survey was conducted, within the framework of the DG Sanco-funded EPHECT project, on the household use of 15 consumer products. These products are all-purpose cleaners, kitchen cleaners, floor cleaners, glass and window cleaners, bathroom cleaners, furniture and floor polish products, combustible air fresheners, spray air fresheners, electric air fresheners, passive air fresheners, coating products for leather and textiles, hair styling products, spray deodorants and perfumes. The analysis of the results from the household survey (1st phase) focused on identifying consumer behaviour patterns (selection criteria, frequency of use, quantities, period of use and ventilation conditions during product use). This can provide valuable input to modelling studies, as this information is not reported in the open literature. The above results were further analysed (2nd phase), to provide the basis for the development of 'most representative worst-case scenarios' regarding the use of the 15 products by home-based population groups (housekeepers and retired people), in four geographical regions in Europe. These scenarios will be used for the exposure and health risk assessment within the EPHECT project. To the best of our knowledge, it is the first time that daily worst-case scenarios are presented in the scientific published literature concerning the use of a wide range of 15 consumer products across Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = 0.013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Ecological risk estimation of organophosphorus pesticides in riverine ecosystems.
Wee, Sze Yee; Aris, Ahmad Zaharin
2017-12-01
Pesticides are of great concern because of their existence in ecosystems at trace concentrations. Worldwide pesticide use and its ecological impacts (i.e., altered environmental distribution and toxicity of pesticides) have increased over time. Exposure and toxicity studies are vital for reducing the extent of pesticide exposure and risk to the environment and humans. Regional regulatory actions may be less relevant in some regions because the contamination and distribution of pesticides vary across regions and countries. The risk quotient (RQ) method was applied to assess the potential risk of organophosphorus pesticides (OPPs), primarily focusing on riverine ecosystems. Using the available ecotoxicity data, aquatic risks from OPPs (diazinon and chlorpyrifos) in the surface water of the Langat River, Selangor, Malaysia were evaluated based on general (RQm) and worst-case (RQex) scenarios. Since the ecotoxicity of quinalphos has not been well established, quinalphos was excluded from the risk assessment. The calculated RQs indicate medium risk (RQm = 0.17 and RQex = 0.66; 0.1 ≤ RQ < 1) for overall diazinon. Overall chlorpyrifos exposure posed a high risk (RQ ≥ 1), with RQm and RQex at 1.44 and 4.83, respectively. A contradictory trend of RQs > 1 (high risk) was observed for both the general and worst cases of chlorpyrifos, but only for the worst cases of diazinon, at all sites from downstream to upstream regions. Thus, chlorpyrifos posed a higher risk than diazinon along the Langat River, suggesting that organisms and humans could be exposed to potentially high levels of OPPs. Copyright © 2017 Elsevier Ltd. All rights reserved.
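The RQ screening above reduces to a ratio of the measured environmental concentration to the predicted no-effect concentration, with the usual bands RQ < 0.1 low, 0.1 ≤ RQ < 1 medium, RQ ≥ 1 high. A minimal sketch (the example MEC/PNEC values are hypothetical; the band classifications of the reported quotients are from the abstract):

```python
# Minimal sketch of the risk-quotient (RQ) method. The classification
# bands are the conventional ones used in the abstract; the example
# MEC/PNEC values at the bottom are hypothetical.

def risk_quotient(mec_ug_l, pnec_ug_l):
    """RQ = measured environmental concentration / predicted
    no-effect concentration."""
    return mec_ug_l / pnec_ug_l

def risk_band(rq):
    if rq >= 1.0:
        return "high"
    return "medium" if rq >= 0.1 else "low"

# The abstract's reported quotients fall in these bands:
assert risk_band(0.17) == "medium"   # diazinon, general case (RQm)
assert risk_band(0.66) == "medium"   # diazinon, worst case (RQex)
assert risk_band(1.44) == "high"     # chlorpyrifos, general case
assert risk_band(4.83) == "high"     # chlorpyrifos, worst case

# Hypothetical example: MEC 0.02 ug/L against a PNEC of 0.5 ug/L
print(risk_band(risk_quotient(0.02, 0.5)))   # -> low
```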
Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.
Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal
2011-08-18
RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach
2011-01-01
Background RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. Results We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. Conclusions The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms. PMID:21851589
Walser, Tobias; Juraske, Ronnie; Demou, Evangelia; Hellweg, Stefanie
2014-01-01
A pronounced presence of toluene from rotogravure printed matter has been frequently observed indoors. However, its consequences to human health in the life cycle of magazines are poorly known. Therefore, we quantified human-health risks in indoor environments with Risk Assessment (RA) and impacts relative to the total impact of toxic releases occurring in the life cycle of a magazine with Life Cycle Assessment (LCA). We used a one-box indoor model to estimate toluene concentrations in printing facilities, newsstands, and residences in a best, average, and worst-case scenario. The modeled concentrations are in the range of the values measured in on-site campaigns. Toluene concentrations can be close or even surpass the occupational legal thresholds in printing facilities in realistic worst-case scenarios. The concentrations in homes can surpass the US EPA reference dose (69 μg/kg/day) in worst-case scenarios, but are still at least 1 order of magnitude lower than in press rooms or newsstands. However, toluene inhaled at home becomes the dominant contribution to the total potential human toxicity impacts of toluene from printed matter when assessed with LCA, using the USEtox method complemented with indoor characterization factors for toluene. The significant contribution (44%) of toluene exposure in production, retail, and use in households, to the total life cycle impact of a magazine in the category of human toxicity, demonstrates that the indoor compartment requires particular attention in LCA. While RA works with threshold levels, LCA assumes that every toxic emission causes an incremental change to the total impact. Here, the combination of the two paradigms provides valuable information on the life cycle stages of printed matter.
Boehmler, Erick M.; Degnan, James R.
1997-01-01
year discharges. In addition, the incipient roadway-overtopping discharge is determined and analyzed as another potential worst-case scour scenario. Total scour at a highway crossing comprises three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour, and a summary of the results of these computations follows. Contraction scour for all modelled flows ranged from 1.2 to 1.8 feet. The worst-case contraction scour occurred at the incipient overtopping discharge, which is less than the 500-year discharge. Abutment scour ranged from 17.7 to 23.7 feet. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
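The total-scour sum defined above can be sketched with the reported worst-case component values. Note the two maxima occurred at different discharges (incipient overtopping vs. 500-year), so summing them is only a conservative upper bound, and long-term degradation is assumed zero here for illustration:

```python
# Minimal sketch of the total-scour sum defined in the report. Summing
# worst-case components from different discharges is a conservative
# upper bound, not the report's adopted value; degradation = 0 is an
# assumption for illustration.

def total_scour_ft(degradation, contraction, local):
    """Total scour depth [ft] = long-term streambed degradation +
    contraction scour + local (pier/abutment) scour."""
    return degradation + contraction + local

upper_bound = total_scour_ft(degradation=0.0,   # assumed
                             contraction=1.8,   # worst case (overtopping)
                             local=23.7)        # worst-case abutment, 500-yr
print(f"conservative total scour: {upper_bound:.1f} ft")   # -> 25.5 ft
```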
Li-Ion Battery By-Pass Removal Qualification
NASA Astrophysics Data System (ADS)
Borthomieu, Y.; Pasquier, E.
2005-05-01
The reason for using by-passes on space batteries is to avoid open circuits, short circuits, and dramatic performance drift in the power system. By-pass diodes are currently used in NiH2 batteries due to the high probability of open circuit at cell level. This probability is mainly linked to the possibility of a hydrogen leak from the pressure vessel, caused by the high operating pressure (70 bars), which can induce a cell open circuit. For lithium-ion batteries, the first items had by-passes implemented by similarity, but all the cell failure cases have been analyzed at battery level:
- Cell open circuit: In contrast to NiCd and NiH2 cells, Li-Ion cells can be put in parallel because the open-circuit voltage (OCV) is linked to the state of charge (SOC). With cells in parallel, a battery open-circuit failure can never be encountered, even with a cell in open circuit.
- Cell short circuit: In case of a cell short, all the cells within the module will be shorted.
- Cell capacity spread: If the capacities of cells in series diverge strongly, the worst module limits the battery. In case the battery is no longer able to deliver the requested power for which it was designed, the worst module has to be reversed. In reversal, a Li-Ion cell is self-shorted, so a strong capacity decrease in one module leads to the short of this module.
These three failure cases cover all the possible Li-Ion failure root causes. Considering these three events, the analysis demonstrates that the Li-Ion battery still functions in any case without any by-pass system, because the battery sizing always takes into account the loss of one module. Furthermore, removing the by-pass should:
- improve battery reliability, as each by-pass unit represents a single point of failure;
- reduce the total price of the battery by at least 30%;
- reduce significant weight at battery level;
- shorten the battery manufacturing lead time (by at least 8 months for by-pass purchasing);
- avoid US export licenses.
A formal qualification of a Li-Ion battery without a by-pass system is ongoing in the frame of an ESA ARTES 3 contract.
DOT National Transportation Integrated Search
1996-04-01
Hurricane Andrew, which struck South Dade County, Florida on the morning of 24 August 1992, was the "worst natural disaster ever to hit the United States..." The capabilities of the local and state governments to respond to the disaster were quickly ...
... reaction can vary from mild to severe. In rare cases, the person with the rash needs to be treated in the hospital. The worst symptoms are often seen during days 4 to 7 after coming in contact with the plant. The rash may last for 1 to 3 ...
Closed Environment Module - Modularization and extension of the Virtual Habitat
NASA Astrophysics Data System (ADS)
Plötner, Peter; Czupalla, Markus; Zhukov, Anton
2013-12-01
The Virtual Habitat (V-HAB) is a Life Support System (LSS) simulation created to perform dynamic simulation of LSSs for future human spaceflight missions. It allows the testing of LSS robustness by means of computer simulations, e.g., of worst-case scenarios.
Management of reliability and maintainability; a disciplined approach to fleet readiness
NASA Technical Reports Server (NTRS)
Willoughby, W. J., Jr.
1981-01-01
Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.
Empirical Modeling Of Single-Event Upset
NASA Technical Reports Server (NTRS)
Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.
1988-01-01
Experimental study presents examples of empirical modeling of single-event upset in negatively-doped-source/drain metal-oxide-semiconductor static random-access memory cells. Data support adoption of simplified worst-case model in which cross section of SEU by ion above threshold energy equals area of memory cell.
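The simplified worst-case model described here is a step function: zero cross section below the threshold, the full sensitive cell area above it. A minimal sketch with hypothetical parameters (cell area, threshold, flux, and bit count are all assumptions):

```python
# Minimal sketch of the step-function worst-case SEU model. All numeric
# parameters are hypothetical, for illustration only.

def seu_cross_section_cm2(let, let_threshold, cell_area_cm2):
    """Step-function model: sigma equals the sensitive cell area for
    LET at or above threshold, else 0."""
    return cell_area_cm2 if let >= let_threshold else 0.0

# Hypothetical SRAM cell: 10 um x 10 um sensitive area, threshold LET 3
area = (10e-4) ** 2   # cm^2

# Expected upset rate scales as flux x cross section x bit count:
flux = 1.0e3          # particles/cm^2/s above threshold (assumed)
bits = 4096
rate = flux * seu_cross_section_cm2(5.0, 3.0, area) * bits
print(f"upsets/s: {rate:.3g}")
```

The worst-case character of the model lies in assuming the full cell area is sensitive for every ion above threshold.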
Kennedy, Reese D; Cheavegatti-Gianotto, Adriana; de Oliveira, Wladecir S; Lirette, Ronald P; Hjelle, Jerry J
2018-01-01
Insect-protected sugarcane that expresses Cry1Ab has been developed in Brazil. Analysis of trade information has shown that effectively all the sugarcane-derived Brazilian exports are raw or refined sugar and ethanol. The fact that raw and refined sugar are highly purified food ingredients, with no detectable transgenic protein, provides an interesting case study of a generalized safety assessment approach. In this study, both the theoretical protein intakes and safety assessments of Cry1Ab, Cry1Ac, NPTII, and Bar proteins used in insect-protected biotechnology crops were examined. The potential consumption of these proteins was examined using local market research data on average added-sugar intakes in eight diverse and representative Brazilian raw and refined sugar export markets (Brazil, Canada, China, Indonesia, India, Japan, Russia, and the USA). The average sugar intakes, which ranged from 5.1 g of added sugar/person/day (India) to 126 g sugar/person/day (USA), were used to calculate possible human exposure. The theoretical protein intake estimates were carried out under a "Worst-case" scenario, which assumed that 1 μg of newly expressed protein is detected per g of raw or refined sugar, and a "Reasonable-case" scenario, which assumed 1 ng protein/g sugar. The "Worst-case" scenario was based on results of detailed studies of sugarcane processing in Brazil showing that refined sugar contains less than 1 μg of total plant protein per g of refined sugar. The "Reasonable-case" scenario was based on the assumption that the expression levels of newly expressed proteins in stalk were less than 0.1% of total stalk protein. Using these calculated protein intake values from the consumption of sugar, along with the accepted NOAEL levels of the four representative proteins, we concluded that safety margins for the "Worst-case" scenario ranged from 6.9 × 10⁵ to 5.9 × 10⁷ and for the "Reasonable-case" scenario from 6.9 × 10⁸ to 5.9 × 10¹⁰. These safety margins are very high due to the extremely low possible exposures and the high NOAELs for these non-toxic proteins. This generalized approach to the safety assessment of highly purified food ingredients like sugar illustrates that sugar processed from Brazilian GM varieties is safe for consumption in representative markets globally.
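The margin arithmetic above can be sketched directly. The per-gram protein concentrations (1 μg/g worst case, 1 ng/g reasonable case) and the sugar intakes are from the abstract; the NOAEL and body weight are hypothetical placeholders, since the abstract reports margins, not NOAELs:

```python
# Minimal sketch of the safety-margin calculation. Sugar intakes and
# per-gram protein concentrations follow the abstract; the NOAEL and
# body weight are hypothetical.

def daily_protein_intake_mg_kg(sugar_g_day, protein_mg_per_g_sugar,
                               body_weight_kg=60.0):
    """Protein intake [mg/kg bw/day] from added-sugar consumption."""
    return sugar_g_day * protein_mg_per_g_sugar / body_weight_kg

def safety_margin(noael_mg_kg_day, intake_mg_kg_day):
    """Margin of exposure: NOAEL divided by estimated intake."""
    return noael_mg_kg_day / intake_mg_kg_day

# Worst case: 1 ug protein/g sugar = 1e-3 mg/g; US intake 126 g/day
worst_intake = daily_protein_intake_mg_kg(126.0, 1e-3)
reasonable_intake = daily_protein_intake_mg_kg(126.0, 1e-6)   # 1 ng/g

noael = 1000.0   # mg/kg bw/day, hypothetical placeholder
print(f"worst-case margin:      {safety_margin(noael, worst_intake):.2e}")
print(f"reasonable-case margin: {safety_margin(noael, reasonable_intake):.2e}")
```

Because the two scenarios differ only by the 1000-fold protein concentration assumption, their margins differ by exactly three orders of magnitude, matching the spread in the abstract.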
Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction
NASA Technical Reports Server (NTRS)
Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.
2006-01-01
The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with an appropriate specification, in NPG8020.12C as a low temperature complementary technique to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study provided an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst case D value may be imposed, a process humidity range for which the worst case D value may be imposed, and robustness to selected spacecraft material substrates.
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications system with high speed downlink packet access (UMTS-HSDPA) signals is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those for weekend days.
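The extrapolation compared in this study can be sketched as follows: the constant-power pilot (BCCH/CPICH) field is scaled up by an extrapolation factor to estimate maximal exposure, and field strength scales with the square root of the power factor. The pilot field and the two factors below are illustrative assumptions, not the study's measured values:

```python
import math

# Minimal sketch of pilot-channel extrapolation for maximal-exposure
# assessment. All numeric values are illustrative assumptions.

def extrapolated_field(e_pilot_v_m, power_factor):
    """Maximal field estimate: E_max = E_pilot * sqrt(power factor),
    since field strength scales with the square root of power."""
    return e_pilot_v_m * math.sqrt(power_factor)

def overestimation_db(e_worst, e_realistic):
    """Field-strength ratio expressed in dB."""
    return 20.0 * math.log10(e_worst / e_realistic)

e_pilot = 0.5   # V/m, measured pilot field (assumed)
e_theoretical = extrapolated_field(e_pilot, 8.0)   # worst-case factor
e_realistic = extrapolated_field(e_pilot, 3.4)     # 99th-percentile traffic

print(f"overestimation: {overestimation_db(e_theoretical, e_realistic):.1f} dB")
```

Comparing the theoretical and 99th-percentile factors in this way is what produces the dB-scale overestimation figures reported in the abstract.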
Full band all-sky search for periodic gravitational waves in the O1 LIGO data
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allen, G.; Allocca, A.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Angelova, S. V.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Atallah, D. V.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Austin, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barkett, K.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bawaj, M.; Bayley, J. C.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Bero, J. J.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Biscoveanu, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonilla, E.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bossie, K.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderón; Callister, T. A.; Calloni, E.; Camp, J. 
B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerdá-Durán, P.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chase, E.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, X.; Chen, Y.; Cheng, H.-P.; Chia, H. Y.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciecielag, P.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Clearwater, P.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Cohen, D.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Cordero-Carrión, I.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, E. T.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Dálya, G.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davis, D.; Daw, E. J.; Day, B.; De, S.; DeBra, D.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Demos, N.; Denker, T.; Dent, T.; De Pietri, R.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; De Rossi, C.; DeSalvo, R.; de Varona, O.; Devenson, J.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorosh, O.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. 
P.; Drago, M.; Dreissigacker, C.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dupej, P.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Estevez, D.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fee, C.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Finstad, D.; Fiori, I.; Fiorucci, D.; Fishbach, M.; Fisher, R. P.; Fitz-Axen, M.; Flaminio, R.; Fletcher, M.; Fong, H.; Font, J. A.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garcia-Quiros, C.; Garufi, F.; Gateley, B.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; Goncharov, B.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Gretarsson, E. M.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Halim, O.; Hall, B. R.; Hall, E. D.; Hamilton, E. Z.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. 
J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinderer, T.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hreibi, A.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kamai, B.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, K.; Kim, W.; Kim, W. S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kinley-Hanlon, M.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Knowles, T. D.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Linker, S. D.; Littenberg, T. B.; Liu, J.; Lo, R. K. L.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. 
D.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macas, R.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Markowitz, A.; Maros, E.; Marquina, A.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Mason, K.; Massera, E.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McNeill, L.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, B. B.; Miller, J.; Millhouse, M.; Milovich-Goff, M. C.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moffa, D.; Moggi, A.; Mogushi, K.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muñiz, E. A.; Muratore, M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Neilson, J.; Nelemans, G.; Nelson, T. J. N.; Nery, M.; Neunzert, A.; Nevin, L.; Newport, J. M.; Newton, G.; Ng, K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; North, C.; Nuttall, L. K.; Oberling, J.; O'Dea, G. D.; Ogin, G. 
H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Okada, M. A.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ossokine, S.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, Howard; Pan, Huang-Wei; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Parida, A.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patil, M.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pirello, M.; Pisarski, A.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Pratten, G.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rajbhandari, B.; Rakhmanov, M.; Ramirez, K. E.; Ramos-Buades, A.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Ren, W.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Rutins, G.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sanchez, L. E.; Sanchis-Gual, N.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Saulson, P. 
R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheel, M.; Scheuer, J.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shaner, M. B.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, L. P.; Singh, A.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Somala, S.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staats, K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stevenson, S. P.; Stone, R.; Stops, D. J.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Strunk, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Suresh, J.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Tait, S. C.; Talbot, C.; Talukder, D.; Tanner, D. B.; Tao, D.; Tápai, M.; Taracchini, A.; Tasson, J. D.; Taylor, J. A.; Taylor, R.; Tewari, S. V.; Theeg, T.; Thies, F.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torres-Forné, A.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tsukada, L.; Tsuna, D.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. 
A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, W. H.; Wang, Y. F.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westerweck, J.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Wilken, D.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, W. K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wysocki, D. M.; Xiao, S.; Yamamoto, H.; Yancey, C. C.; Yang, L.; Yap, M. J.; Yazback, M.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadroźny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2018-05-01
We report on a new all-sky search for periodic gravitational waves in the frequency band 475-2000 Hz and with a frequency time derivative in the range [-1.0, +0.1]×10⁻⁸ Hz/s. Potential signals could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our Galaxy. This search uses the data from Advanced LIGO's first observational run, O1. No gravitational-wave signals were observed, and upper limits were placed on their strengths. For completeness, results from the separately published low-frequency search (20-475 Hz) are included as well. Our lowest upper limit on the worst-case (linearly polarized) strain amplitude h₀ is ~4×10⁻²⁵ near 170 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 1.3×10⁻²⁴. For a circularly polarized source (most favorable orientation), the smallest upper limit obtained is ~1.5×10⁻²⁵.
Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?
NASA Astrophysics Data System (ADS)
Ferrie, Chris; Blume-Kohout, Robin
2011-03-01
Quantum state estimation (a.k.a. ``tomography'') plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g., for classical coins or dice - but with some subtle and important differences. We demonstrate an improved classical analogue that captures many of these differences: the ``noisy coin.'' Observations on noisy coins are unreliable - much like answers to questions soliciting sensitive information, such as one's tax preparation habits. So, like a quantum system, a noisy coin cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.
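The qualitative difference between ordinary and noisy coins can be seen in a small simulation. The channel-inverting estimator and all parameter values here are our own illustrative choices, not the paper's construction:

```python
import random

def estimation_risk(p, eps, n_samples, n_trials, seed=0):
    """Mean squared error of a natural estimator for a coin of bias p
    whose outcomes are observed through a channel that flips each result
    with probability eps (eps = 0 recovers an ordinary coin)."""
    rng = random.Random(seed)
    q = p * (1 - eps) + (1 - p) * eps   # probability of *observing* heads
    total = 0.0
    for _ in range(n_trials):
        heads = sum(rng.random() < q for _ in range(n_samples))
        q_hat = heads / n_samples
        # invert the noise channel, clipping to the physical range [0, 1]
        p_hat = min(1.0, max(0.0, (q_hat - eps) / (1.0 - 2.0 * eps)))
        total += (p_hat - p) ** 2
    return total / n_trials

# At the "pure" state p = 0, an ordinary coin is estimated perfectly,
# while the noisy coin retains substantial risk:
risk_clean = estimation_risk(p=0.0, eps=0.0, n_samples=100, n_trials=500)
risk_noisy = estimation_risk(p=0.0, eps=0.1, n_samples=100, n_trials=500)
```

This captures the abstract's point that nearly-pure states, which are trivial for standard coins, dominate the risk once observations pass through a noise channel.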
Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1994-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment (HALOE). A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.
Correct consideration of the index of refraction using blackbody radiation.
Hartmann, Jurgen
2006-09-04
The correct consideration of the index of refraction when using blackbody radiators as standard sources for optical radiation is derived and discussed. It is shown that simply using the index of refraction of air at laboratory conditions is not sufficient; a combination of the index of refraction of the medium inside the blackbody radiator and that of the optical path between blackbody and detector has to be used instead. A worst-case approximation of the error introduced by neglecting these effects is presented, showing that the error is below 0.1% for wavelengths above 200 nm. Nevertheless, for the determination of spectral radiance for radiation temperature measurements, the correct consideration of the refractive index is mandatory. The worst-case estimate reveals that the introduced temperature error at a blackbody temperature of 3000 °C can be as high as 400 mK at a wavelength of 650 nm, and even higher at longer wavelengths.
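A rough sense of the size of the effect follows from Wien's approximation, under the crude assumption that the index enters only through the optical wavelength. This is a sketch, not the paper's full derivation, and the index value is an approximate one for air:

```python
import math

C2 = 1.4388e-2        # second radiation constant c2, m*K
N_AIR = 1.000277      # approximate refractive index of air near 650 nm

def inferred_temperature(t_true, lam_vac, n):
    """Wien-approximation sketch: the spectral signal goes as
    exp(-c2 / (n * lam * T)).  Work with the exponent directly (the raw
    exponential underflows at these wavelengths) and invert it while
    (wrongly) taking n = 1, which yields T' = n * T to first order."""
    log_signal = -C2 / (n * lam_vac * t_true)
    return -C2 / (lam_vac * log_signal)

t_true = 3000.0 + 273.15            # 3000 degC blackbody, in kelvin
t_app = inferred_temperature(t_true, 650e-9, N_AIR)
dT = t_app - t_true                 # error from neglecting the index
```

The sketch gives an error on the order of 1 K at 3000 °C, the same order of magnitude as the sub-kelvin worst-case figure quoted above; the paper's fuller treatment of radiance calibration accounts for the difference.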
Burns, Ronda L.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 15.8 to 22.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.7 to 11.1 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring is included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
A CMOS matrix for extracting MOSFET parameters before and after irradiation
NASA Technical Reports Server (NTRS)
Blaes, B. R.; Buehler, M. G.; Lin, Y.-S.; Hicks, K. A.
1988-01-01
An addressable matrix of 16 n- and 16 p-MOSFETs was designed to extract the dc MOSFET parameters for all dc gate bias conditions before and after irradiation. The matrix contains four sets of MOSFETs, each with four different geometries that can be biased independently. Thus the worst-case bias scenarios can be determined. The MOSFET matrix was fabricated at a silicon foundry using a radiation-soft CMOS p-well LOCOS process. Co-60 irradiation results for the n-MOSFETs showed a threshold-voltage shift of -3 mV/krad(Si), whereas the p-MOSFETs showed a shift of 21 mV/krad(Si). The worst-case threshold-voltage shift occurred for the n-MOSFETs, with a gate bias of 5 V during the anneal. For the p-MOSFETs, biasing did not affect the shift in the threshold voltage. A parasitic MOSFET dominated the leakage of the n-MOSFET biased with 5 V on the gate during irradiation. Co-60 test results for other parameters are also presented.
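The quoted sensitivities imply a simple linear total-dose model for the threshold-voltage shift. As a sketch: the linearity over large doses is our assumption (real devices saturate), and only the per-krad sensitivities come from the abstract:

```python
def vth_shift(dose_krad, sens_mv_per_krad):
    """Linear total-dose model: threshold-voltage shift (mV) after a
    given Co-60 dose, given a sensitivity in mV per krad(Si)."""
    return dose_krad * sens_mv_per_krad

# sensitivities from the abstract: -3 mV/krad(Si) for n-MOSFETs,
# +21 mV/krad(Si) for p-MOSFETs
shift_n = vth_shift(100, -3)    # n-MOSFET shift after 100 krad(Si), mV
shift_p = vth_shift(100, 21)    # p-MOSFET shift after 100 krad(Si), mV
```

At 100 krad(Si) the model predicts -300 mV for the n-channel and +2100 mV for the p-channel devices, consistent with the description of this process as radiation-soft.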
Mad cows and computer models: the U.S. response to BSE.
Ackerman, Frank; Johnecheck, Wendy A
2008-01-01
The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.
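The difference between one-at-a-time and joint worst-case parameter settings is easy to see in a toy multiplicative-risk model. The model below is entirely hypothetical (not USDA's), chosen only so the two conventions land on opposite sides of the epidemic threshold R₀ = 1:

```python
def r0_toy(factors):
    """Hypothetical toy 'reproduction number': the product of
    per-pathway transmission factors.  Purely illustrative."""
    r = 1.0
    for f in factors:
        r *= f
    return r

NOMINAL, WORST, N_PARAMS = 0.5, 1.5, 4

# One-at-a-time: only one parameter takes its worst-case value
# (the constraint criticized above).
one_at_a_time = max(
    r0_toy([WORST if j == i else NOMINAL for j in range(N_PARAMS)])
    for i in range(N_PARAMS)
)
# Joint worst case: every parameter at its worst value simultaneously.
joint = r0_toy([WORST] * N_PARAMS)
```

Here the one-at-a-time convention yields 0.1875 (the outbreak dies out) while the joint worst case yields 5.0625 (it spreads rapidly) — the structural reason reassuring forecasts can follow from an arbitrary constraint.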
Modelling the long-term evolution of worst-case Arctic oil spills.
Blanken, Hauke; Tremblay, Louis Bruno; Gaskin, Susan; Slavin, Alexander
2017-03-15
We present worst-case assessments of contamination in sea ice and surface waters resulting from hypothetical well-blowout oil spills at ten sites in the Arctic Ocean basin. Spill extents are estimated by considering Eulerian passive tracers in the surface ocean of the MITgcm (a hydrostatic, coupled ice-ocean model). Oil in sea ice, and contamination resulting from the melting of oiled ice, is tracked using an offline Lagrangian scheme. Spills are initialized on November 1st of each year from 1980 to 2010 and tracked for one year. An average spill was transported 1100 km and potentially affected 1.1 million km². The direction and magnitude of simulated oil trajectories are consistent with known large-scale current and sea ice circulation patterns, and trajectories frequently cross international boundaries. The simulated trajectories of oil in sea ice match observed ice-drift trajectories well. During winter, oil transport by drifting sea ice is more significant than transport by surface currents.
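The offline Lagrangian scheme can be sketched, under a drastically simplified toy velocity field, as a forward-Euler particle integration. The actual study advects parcels through MITgcm ice-ocean fields; the uniform drift speed here is invented to match the reported transport scale:

```python
def advect(pos, velocity, dt, n_steps):
    """Forward-Euler Lagrangian step: minimal offline tracking of a
    parcel in a prescribed velocity field (positions in m, dt in s)."""
    x, y = pos
    for _ in range(n_steps):
        u, v = velocity(x, y)
        x += u * dt
        y += v * dt
    return x, y

DAY = 86400.0
# A steady 0.04 m/s drift sustained for a year covers ~1260 km,
# the order of the 1100 km average transport reported above.
x, y = advect((0.0, 0.0), lambda x, y: (0.04, 0.0), DAY, 365)
```

In the real scheme the velocity at each step would be interpolated from model output, and a parcel switches between ice drift and surface-current advection depending on whether it is frozen in.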
A novel N-input voting algorithm for X-by-wire fault-tolerant systems.
Karimi, Abbas; Zarafshan, Faraneh; Al-Haddad, S A R; Ramli, Abdul Rahman
2014-01-01
Voting is an important operation in the multichannel computation paradigm and in the realization of ultrareliable and real-time control systems, arbitrating among the results of N redundant variants. These systems include N-modular redundant (NMR) hardware systems and diversely designed software systems based on N-version programming (NVP). Depending on the characteristics of the application and the type of selected voter, the voting algorithms can be implemented for either hardware or software systems. In this paper, a novel voting algorithm is introduced for real-time fault-tolerant control systems, appropriate for applications in which N is large. Its behavior has been implemented in software under different scenarios of error injection on the system inputs. The results of the evaluations, analyzed through plots and statistical computations, demonstrate that this novel algorithm does not have the limitations of some popular voting algorithms such as the median and weighted voters; moreover, it is able to significantly increase the reliability and availability of the system, by as much as 2489.7% and 626.74%, respectively, in the best case, and by 3.84% and 1.55%, respectively, in the worst case.
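The abstract does not spell out the new algorithm itself, so as a point of reference here is one of the classical voters it is compared against — a plain median voter, in our own minimal sketch:

```python
def median_voter(inputs):
    """Standard median voter over N redundant channel outputs: returns
    the middle value (mean of the two middle values for even N)."""
    s = sorted(inputs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

# A single wildly faulty channel cannot drag the output far:
safe = median_voter([10.0, 10.1, 9.9, 500.0, 10.0])   # -> 10.0
```

A median voter masks up to ⌊(N-1)/2⌋ arbitrary faults but, as the paper notes, has limitations of its own (e.g. it ignores channel trustworthiness, which weighted voters address at other costs).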
NASA Astrophysics Data System (ADS)
Nigon, R.; Raeder, T. M.; Muralt, P.
2017-05-01
The accurate evaluation of ferroelectric thin films operated with interdigitated electrodes is quite a complex task. In this article, we show how to correct the electric field and the capacitance in order to obtain identical polarization and C-V loops for all geometrical variants. The simplest model is compared with corrections derived from Schwarz-Christoffel transformations and with finite-element simulations. The correction procedure is experimentally verified, giving almost identical curves for a variety of gaps and electrode widths. It is shown that the measured polarization change corresponds to the average polarization change in the center plane between the electrode fingers, that is, at the position where the electric field is most homogeneous in direction and magnitude. The question of the maximal achievable polarization in the various possible textures and compositional types of polycrystalline lead zirconate titanate thin films is revisited. In the best case, a soft (110)-textured thin film with the morphotropic phase boundary composition should yield a value of 0.95 Pₛ, and in the worst case, a rhombohedral (100)-textured thin film should deliver a polarization of 0.74 Pₛ.
Single-Event Effect Performance of a Conductive-Bridge Memory EEPROM
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Berg, Melanie; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Seidleck, Christina; LaBel, Kenneth
2015-01-01
We investigated the heavy-ion single-event effect (SEE) susceptibility of the industry's first stand-alone memory based on conductive-bridge memory (CBRAM) technology. The device is available as an electrically erasable programmable read-only memory (EEPROM). We found that single-event functional interrupt (SEFI) is the dominant SEE type for each operational mode (standby, dynamic read, and dynamic write/read). SEFIs occurred even while the device was statically biased in standby mode. Worst-case SEFIs resulted in errors that filled the entire memory space. A power cycle did not always clear the errors, so the corrupted cells had to be reprogrammed in some cases. The device is also vulnerable to bit upsets during dynamic write/read tests, although the frequency of the upsets is relatively low. The linear energy transfer threshold for cell upset is between 10 and 20 MeV·cm²/mg, with an upper-limit cross section of 1.6×10⁻¹¹ cm²/bit (95% confidence level) at 10 MeV·cm²/mg. In standby mode, the CBRAM array appears invulnerable to bit upsets.
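A 95%-confidence upper-limit cross section of this kind typically comes from the standard Poisson bound when few or no events are observed. The fluence and array-size numbers below are invented for illustration, not the paper's test conditions:

```python
import math

def poisson_ul_sigma(n_observed, fluence, n_bits, cl=0.95):
    """Upper-limit cross section per bit when no upsets are seen.
    For zero observed events, the Poisson upper limit on the mean at
    confidence cl is -ln(1 - cl) (~3.0 counts at 95%); the cross
    section bound is then counts / (fluence * bits)."""
    if n_observed != 0:
        raise NotImplementedError("sketch covers the zero-event case only")
    mean_ul = -math.log(1.0 - cl)
    return mean_ul / (fluence * n_bits)

# e.g. a fluence of 1e7 ions/cm^2 on a 1-Mibit array with no upsets:
sigma_ul = poisson_ul_sigma(0, 1e7, 2**20)   # cm^2 per bit
```

Quoting the bound per bit, as the abstract does, lets results be compared across memories of different densities.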
Köster, Christina; Heller, Günther; Wrede, Stephanie; König, Thomas; Handstein, Steffen; Szecsenyi, Joachim
2015-08-31
Numerous studies from around the world have shown a positive association between case numbers and the quality of medical care. The evidence to date suggests that conformity to guidelines for the treatment of patients with breast cancer is better in German hospitals that have higher case numbers. We used data obtained by an external program for quality assurance in inpatient care (externe stationäre Qualitätssicherung, esQS) for the years 2013 and 2014 to investigate seven process indicators in the area of breast surgery, including histologic confirmation of the diagnosis before definitive treatment, axillary dissection as recommended by the guidelines, and an appropriate temporal interval between diagnosis and operation. Case numbers were categorized with the aid of various threshold values. Moreover, subgroup analyses were carried out for patients under age 65, patients in good general health, patients without lymph-node involvement, and patients with a tumor size pT0 or pT1 or an overall tumor size less than 5 cm. Data on 153,475 patients from 939 hospitals were analyzed. Six of seven indicators had values that were better overall, to a statistically significant extent, in hospitals with higher case numbers. Although this relationship was not consistently seen, the worst results were generally found in the category with the lowest case numbers. Similar though less striking results were obtained in the subgroup analyses. An exception to the general finding was that, in hospitals with higher case numbers, the interval between diagnosis and operation was more often longer than three weeks. Guideline adherence is higher in hospitals that treat more cases. The present study does not address the question whether this, in turn, affects morbidity or mortality. To improve process quality in peripheral hospitals, the quality assurance program should be continued.
Power and Particle Balance Calculations with Impurities in NSTX
NASA Astrophysics Data System (ADS)
Holland, C. G.; Maingi, R.; Owen, L. W.; Kaye, S. M.
1998-11-01
We reported the development [C. Holland et al., Bull. Am. Phys. Soc. 42 (1997) 1927] and application [R. Maingi et al., Proc. 3rd International Workshop on Spherical Tori, Sept. 3-5, 1997, St. Petersburg, Russia] of a Graphical User Interface to assess the important terms for edge and divertor plasma calculations for NSTX with the b2.5 edge plasma transport code [B. Braams, Contrib. Plasma Phys. 36 (1996) 276]. The goals of those calculations were to estimate the worst-case peak heat flux for plasma-facing component design, and the radiation requirements to reduce the peak heat flux. In this study we present the first simulations with intrinsic carbon impurity radiation. We find in general that the intrinsic carbon radiation should be sufficient to provide a wide operating window for the NSTX device. Details of the relative importance of heat-flux transport mechanisms, as determined with the GUI, will be presented.
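A back-of-the-envelope peak heat flux estimate is the kind of number such design studies start from. The geometry, wetted width, and power values below are invented for illustration and are not NSTX parameters or b2.5 outputs:

```python
import math

def peak_heat_flux(p_div_mw, r_strike_m, lam_m, f_exp):
    """Rule-of-thumb peak divertor heat flux (MW/m^2): power reaching
    the divertor spread over an annulus of radius r_strike and wetted
    width lam, widened by a poloidal flux-expansion factor f_exp."""
    return p_div_mw / (2 * math.pi * r_strike_m * lam_m * f_exp)

# hypothetical values: 3 MW to the divertor, 0.6 m strike radius,
# 1 cm wetted width, flux expansion of 5
q_peak = peak_heat_flux(3.0, 0.6, 0.01, 5.0)   # ~16 MW/m^2
```

Estimates of this magnitude are exactly why the study asks how much intrinsic impurity radiation can shave off the peak before it reaches plasma-facing components.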
Fast Inference with Min-Sum Matrix Product.
Felzenszwalb, Pedro F; McAuley, Julian J
2011-12-01
The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
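For concreteness, here is the min-sum ("tropical") matrix product itself, in its naive cubic form — the O(n^3) baseline that the paper's expected-time algorithm improves on:

```python
import math

def min_sum_product(a, b):
    """Naive O(n^3) min-sum product of two n x n matrices:
    c[i][j] = min over k of a[i][k] + b[k][j]."""
    n = len(a)
    c = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                s = aik + b[k][j]
                if s < c[i][j]:
                    c[i][j] = s
    return c

a = [[0, 2], [1, 0]]
b = [[0, 3], [5, 0]]
c = min_sum_product(a, b)   # c[0][1] = min(0 + 3, 2 + 0) = 2
```

In MAP inference the matrix entries are (negative log) clique potentials, so this product propagates the best partial-assignment cost across a junction, just as an ordinary matrix product propagates probabilities in sum-product.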
Extensor Mechanism Disruption after Total Knee Arthroplasty: A Case Series and Review of Literature.
Vaishya, Raju; Agarwal, Amit Kumar; Vijay, Vipul
2016-02-04
Extensor mechanism disruption following total knee arthroplasty (TKA) is a rare but devastating complication. These patients may require revision of the implants, but even then, it may not be possible to restore the normal function of the knee after the disruption. The patterns of extensor mechanism disruption can broadly be classified into three types: suprapatellar (quadriceps tendon rupture), transpatellar (patellar fracture), or infrapatellar (patellar tendon rupture). Infrapatellar tendon ruptures are the worst injuries, as they carry maximum morbidity and are challenging to manage. The disruption of the extensor mechanism may occur either intra-operatively or in the immediate postoperative period due to an injury. The treatment of extensor mechanism complications after TKA may include either nonsurgical management or surgical intervention in the form of primary repair or reconstruction with autogenous, allogeneic, or synthetic substitutes. We have provided an algorithm for the management of extensor mechanism disruption after TKA.
An Air Revitalization Model (ARM) for Regenerative Life Support Systems (RLSS)
NASA Technical Reports Server (NTRS)
Hart, Maxwell M.
1990-01-01
The primary objective of the Air Revitalization Model (ARM) is to determine the minimum buffer capacities that would be necessary for long-duration space missions. Two choices drive the configuration sizes: the baseline values for each gas, and the day-to-day or month-to-month fluctuations that are allowed. The baseline values depend on the minimum safety tolerances and the quantities of life support consumables necessary to survive the worst-case scenarios within those tolerances. Most, if not all, of these quantities can easily be determined by ARM once the tolerances are set. The allowed day-to-day fluctuations also require a command decision. It is already apparent from the current configuration of ARM that the tighter these fluctuations are controlled, the more energy is used, the more nonregenerable hydrazine is consumed, and the larger the required capacities for the various gas generators. All of these relationships could clearly be quantified by one operational ARM.
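In its simplest form, the buffer-sizing logic described above reduces to consumption rate times worst-case outage duration, plus a margin. The oxygen rate, crew size, duration, and margin below are illustrative assumptions, not ARM inputs or outputs:

```python
def min_buffer(daily_use, worst_case_days, safety_margin=0.1):
    """Minimum buffer capacity: the consumables needed to survive the
    worst-case outage duration, plus a fractional safety margin."""
    return daily_use * worst_case_days * (1.0 + safety_margin)

# hypothetical example: 0.84 kg O2 per crew-member per day, 4 crew,
# a 30-day worst-case regeneration outage, 10% margin
o2_buffer_kg = min_buffer(0.84 * 4, 30)
```

A full ARM run would layer on the gas-by-gas safety tolerances and the energy/hydrazine trade-offs the abstract mentions; this sketch only shows why tighter tolerances translate directly into larger tank capacities.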
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel; Makarov, Yuri (PNNL); Subbarao, Kris (PNNL)
RUT software is designed for use by the Balancing Authorities to predict and display additional requirements caused by the variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. This deficiency of balancing resources can cause serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty including wind, solar and load forecast errors. The tool evaluates required generation for a worst case scenario, with a user-specified confidence level.
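As a rough illustration of the idea described above (not PNNL's algorithm; all distributions and numbers below are invented), a confidence-level balancing requirement can be sketched by sampling the combined forecast errors and reading off an empirical quantile:

```python
import random

# Hedged sketch: sample independent load, wind and solar forecast errors,
# form the net extra-generation requirement, and report the value at a
# user-specified confidence level, playing the role of the tool's
# "worst case scenario" capacity estimate. Standard deviations are invented.
random.seed(1)

def balancing_requirement(n=50_000, confidence=0.95):
    samples = []
    for _ in range(n):
        load_err = random.gauss(0, 150)    # MW, load under-forecast error
        wind_err = random.gauss(0, 80)     # MW, wind forecast error
        solar_err = random.gauss(0, 40)    # MW, solar forecast error
        # extra generation is needed when load is under-forecast
        # or renewable output is over-forecast
        samples.append(load_err - wind_err - solar_err)
    samples.sort()
    return samples[int(confidence * n)]    # empirical quantile

req = balancing_requirement()
print(f"{req:.0f} MW of balancing capacity covers 95% of intervals")
```

Tightening the confidence level toward 1.0 drives the requirement up the tail of the distribution, which mirrors the abstract's point that stricter control implies larger reserve capacities.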
Temperature Distribution Measurement of The Wing Surface under Icing Conditions
NASA Astrophysics Data System (ADS)
Isokawa, Hiroshi; Miyazaki, Takeshi; Kimura, Shigeo; Sakaue, Hirotaka; Morita, Katsuaki; Japan Aerospace Exploration Agency Collaboration; Univ of Notre Dame Collaboration; Kanagawa Institute of Technology Collaboration; Univ of Electro-Communications (UEC) Team
2016-11-01
De- or anti-icing systems are necessary for safe aircraft flight operation. Icing is a phenomenon caused by supercooled water droplets that freeze onto an object on impact. In-flight icing can change the wing cross section, causing stall and, in the worst case, loss of the aircraft. It is therefore important to know the surface temperature of the wing when designing de- or anti-icing systems. In the aerospace field, temperature-sensitive paint (TSP) has been widely used to obtain the surface temperature distribution on a test article: the luminescent image from the TSP can be related to the temperature distribution (the TSP measurement system). In an icing wind tunnel, we measured the surface temperature distribution of a wing model using the TSP measurement system, and the effect of icing conditions on the TSP measurement system is discussed.
Continuity planning for workplace infectious diseases.
Welch, Nancy; Miller, Pamela Blair; Engle, Lisa
2016-01-01
Traditionally, business continuity plans prepare for worst-case scenarios; people plan for the exception rather than the common. Plans focus on infrastructure damage and recovery wrought by such disasters as hurricanes, terrorist events or tornadoes. Yet, another very real threat looms present every day, every season and can strike without warning, wreaking havoc on the major asset -- human capital. Each year, millions of dollars are lost in productivity, healthcare costs, absenteeism and services due to infectious, communicable diseases. Sound preventive risk management and recovery strategies can avert this annual decimation of staff and ensure continuous business operation. This paper will present a strong economic justification for the recognition, prevention and mitigation of communicable diseases as a routine part of continuity planning for every business. Recommendations will also be provided for environmental/engineering controls as well as personnel policies that address employee and customer protection, supply chain contacts and potential legal issues.
NASA Astrophysics Data System (ADS)
Fatrias, D.; Kamil, I.; Meilani, D.
2018-03-01
Coordinating business operations with suppliers becomes increasingly important to survive and prosper in a dynamic business environment. A good partnership with suppliers not only increases efficiency but also strengthens corporate competitiveness. With this concern in mind, this study aims to develop a practical approach to multi-criteria supplier evaluation using the combined methods of the Taguchi loss function (TLF), the best-worst method (BWM) and VIse Kriterijumska Optimizacija kompromisno Resenje (VIKOR). A new framework integrating these methods is our main contribution to the supplier-evaluation literature. In this integrated approach, a compromise supplier ranking list based on the loss scores of suppliers is obtained through the efficient steps of a pairwise-comparison-based decision-making process. Implementation on a case problem with real data from the crumb rubber industry shows the usefulness of the proposed approach. Finally, suitable managerial implications are presented.
Analysis of on-orbit thermal characteristics of the 15-meter hoop/column antenna
NASA Technical Reports Server (NTRS)
Andersen, Gregory C.; Farmer, Jeffery T.; Garrison, James
1987-01-01
In recent years, interest in large deployable space antennas has led to the development of the 15-meter hoop/column antenna. The thermal environment the antenna is expected to experience during orbit is examined and the temperature distributions leading to reflector surface distortion errors are determined. Two flight orientations are examined, corresponding to (1) normal operation and (2) use in a Shuttle-attached flight experiment. A reduced element model was used to determine element temperatures at 16 orbit points for both flight orientations. Temperatures ranged from a minimum of 188 K to a maximum of 326 K. Based on the element temperatures, orbit positions leading to possible worst-case surface distortions were determined, and the corresponding temperatures were used in a static finite element analysis to quantify surface control cord deflections. The predicted changes in the control cord lengths were in the submillimeter range.
A space-efficient quantum computer simulator suitable for high-speed FPGA implementation
NASA Astrophysics Data System (ADS)
Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel
2009-05-01
Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
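The space-time tradeoff alluded to above can be illustrated with a toy Feynman path-sum simulator (my own sketch, not the paper's design): a single output amplitude is computed recursively as a sum over intermediate basis states, so memory grows with circuit depth rather than with the 2^n size of the state vector, at the cost of exponential time in the number of branching gates.

```python
from math import sqrt

# Toy path-sum simulator: compute <out_state| U_k ... U_1 |in_state> for a
# circuit of single-qubit gates, recursing backwards through the gate list.
# Space is O(depth); time is exponential in the number of branching gates.
H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]   # Hadamard gate
X = [[0.0, 1.0], [1.0, 0.0]]        # NOT gate

def amplitude(gates, out_state, in_state):
    """gates: list of (qubit_index, 2x2 matrix); states: tuples of bits."""
    if not gates:
        return 1.0 if out_state == in_state else 0.0
    q, U = gates[-1]
    total = 0.0
    for b in (0, 1):   # sum over the intermediate value of qubit q
        mid = out_state[:q] + (b,) + out_state[q + 1:]
        total += U[out_state[q]][b] * amplitude(gates[:-1], mid, in_state)
    return total

# Two Hadamards compose to the identity, so |0> maps back to |0>:
circuit = [(0, H), (0, H)]
print(amplitude(circuit, (0,), (0,)))   # close to 1.0
print(amplitude(circuit, (1,), (0,)))   # close to 0.0
```

Because the recursion touches only one path at a time, this style of simulation maps naturally onto the small-memory, high-speed FPGA setting the abstract targets.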
Investigation of the Human Response to Upper Torso Retraction with Weighted Helmets
2013-09-01
coverage of each test. The Kodak system is capable of recording high-speed motion up to a rate of 1000 frames per second. For this study, the video...the measured center-of-gravity (CG) of the worst-case test helmet fell outside the current limits and no injuries were observed, it can be stated...Figure 7. T-test Cases 1-9 (0 lb Added Helmet Weight
The Marine Corps Operating Concept: How an Expeditionary Force Operates in the 21st Century
2016-09-01
Moderator: Capt Pierce, you were a company commander on the MEU. How did OLR start for you? Well, up to the point that the enemy started crossing the...came in fast from long range on small boats. Worst beaches you’ve ever seen. Windward side, rocks everywhere, shoving boats in between the breakers...While the landing force didn’t get ashore unscathed, they mitigated the damage by using numerous small landing sites and emphasizing speed and
1980-08-01
the sequence threshold does not utilize the DC level information and the time thresholding adaptively adjusts for DC level. This characteristic...lowest 256/8 = 32 elements. The above observation can be mathematically proven to also relate the fact that the lowest (NT/W) elements can, at worst case
ERIC Educational Resources Information Center
Fitzgerald, Patricia L.
1998-01-01
Although only 5% of the population has severe food allergies, school business officials must be prepared for the worst-case scenario. Banning foods and segregating allergic children are harmful practices. Education and sensible behavior are the best medicine when food allergies and intolerances are involved. Resources are listed. (MLH)
Shuttle ECLSS ammonia delivery capability
NASA Technical Reports Server (NTRS)
1976-01-01
The possible effects of excessive requirements on ammonia flow rates required for entry cooling, due to extreme temperatures, on mission plans for the space shuttles, were investigated. An analysis of worst case conditions was performed, and indicates that adequate flow rates are available. No mission impact is therefore anticipated.
41 CFR 102-80.145 - What is meant by “flashover”?
Code of Federal Regulations, 2010 CFR
2010-07-01
...”? Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m2 (1.8 Btu/ft2/sec). Reasonable Worst Case...
41 CFR 102-80.145 - What is meant by “flashover”?
Code of Federal Regulations, 2011 CFR
2011-01-01
...”? Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m2 (1.8 Btu/ft2/sec). Reasonable Worst Case...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, Brett C.; Less, Brennan D.; Delp, William W.
To inform efforts to improve combustion appliance testing in residential energy efficiency programs, we studied the frequency of coincident fan use and depressurization-induced downdrafting and spillage from atmospherically vented (i.e., natural draft) wall furnaces in airtight apartments. Indoor environmental conditions, heating appliance operation, use of exhaust fans, and cooking with stovetop or oven were monitored for approximately three weeks each in 16 apartment units in two buildings in Northern California. Apartments also were assessed using standard combustion appliance safety test methods and enhanced protocols. Monitoring occurred in February and March of 2016, with heating demand corresponding to 7.3 ± 0.5 heating degree-days at a 65ºF reference temperature. Most of the furnaces spilled combustion products when the apartments were depressurized in the “worst-case” challenge condition of all exhaust fans operating at their highest settings and all windows closed. Many also spilled under less challenging conditions (e.g., with kitchen exhaust fan on low and bathroom fan operating). On average, bathroom exhaust fans were operated 3.9% of monitored minutes (13.5% max), and cooking (burner or kitchen fan operation) occurred 4.6% of minutes (max 13.3%). Event lengths averaged 17 minutes (max 540) and 34 minutes (max 324), respectively. Their coincident operation averaged 0.34% of minutes (max 2.0%), with average event length of 13 minutes (max 92 minutes). This suggests that the operation of apartment units at or near the currently used worst-case challenge condition is quite rare. Wall furnace burners operated an average of 2.8% of minutes (max of 8.9%), with average burner cycle length of 14 minutes (max 162). Coincident bath fan use, cooking and wall furnace operation was very rare, occurring only a handful of times across all apartments. The highest rate was 0.075% of monitored minutes in one apartment, and the longest event length was 12 minutes.
Exhaust fan operation in this study may have been more frequent than typical as participants were asked to use an exhaust fan whenever cooking or bathing. Consistent with the low levels of coincident operation, unambiguous spillage occurred in only 4 apartments and the longest event was 5 minutes. The frequency of partial spillage is unknown, owing to a lack of a clear signal from monitored parameters. Downdrafting during exhaust fan use occurred in all 13 of the apartments with relevant data, and 9 of these units had 10 or more events. Exhaust fans also sometimes led to weakened draft, even if downdrafting did not occur. Each unambiguous spillage event identified in the study was immediately preceded by downdrafting. The observed occurrence of downdrafting and spillage may have been impacted in those apartments with the most severe drafting problems (i.e., appliances spilled combustion pollutants under ‘natural’ test conditions), because occupants in these units were instructed to open windows whenever using the kitchen exhaust fan.
40 CFR 266.106 - Standards to control metals emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...
40 CFR 266.106 - Standards to control metals emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...
The off-site consequence analysis (OCA) evaluates the potential for worst-case and alternative accidental release scenarios to harm the public and environment around the facility. Public disclosure would likely reduce the number/severity of incidents.
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2010 CFR
2010-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2010 CFR
2010-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
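The calculation the excerpt outlines can be sketched as follows (the variable names and the numbers in the example are mine, chosen for illustration; the regulation text of 33 CFR 154.1029 is authoritative):

```python
# Illustrative sketch of the worst-case discharge (WCD) arithmetic described
# in the snippet: each pipe contributes (time to discover the release + time
# to shut down flow) * maximum flow rate, plus its line-drainage volume, and
# the facility WCD adds the capacity of the largest tank or vessel regardless
# of the presence of secondary containment.

def pipe_discharge(discover_h, shutdown_h, flow_bbl_per_h, drainage_bbl):
    """Worst-case discharge volume (bbl) from a single pipe."""
    return (discover_h + shutdown_h) * flow_bbl_per_h + drainage_bbl

def worst_case_discharge(largest_tank_bbl, pipes):
    """Facility WCD: largest tank plus the discharge from all piping."""
    return largest_tank_bbl + sum(pipe_discharge(*p) for p in pipes)

# Hypothetical example: a 50,000 bbl tank and two transfer pipes
pipes = [(0.5, 0.25, 4000.0, 120.0),   # 45 min total response at 4,000 bbl/h
         (1.0, 0.50, 2000.0, 60.0)]    # 90 min total response at 2,000 bbl/h
print(worst_case_discharge(50_000.0, pipes))   # 56180.0 bbl
```

The structure makes the planning incentive visible: shortening detection and shutdown times directly shrinks the scenario volume a response plan must cover.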
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2011 CFR
2011-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2012 CFR
2012-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2013 CFR
2013-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2014 CFR
2014-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2013 CFR
2013-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2014 CFR
2014-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2012 CFR
2012-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
Competitive Strategies and Financial Performance of Small Colleges
ERIC Educational Resources Information Center
Barron, Thomas A., Jr.
2017-01-01
Many institutions of higher education are facing significant financial challenges, resulting in diminished economic viability and, in the worst cases, the threat of closure (Moody's Investor Services, 2015). The study was designed to explore the effectiveness of competitive strategies for small colleges in terms of financial performance. Five…
30 CFR 254.21 - How must I format my response plan?
Code of Federal Regulations, 2010 CFR
2010-07-01
... divide your response plan for OCS facilities into the sections specified in paragraph (b) and explained in the other sections of this subpart. The plan must have an easily found marker identifying each.... (ii) Contractual agreements. (iii) Worst case discharge scenario. (iv) Dispersant use plan. (v) In...
Safety in the Chemical Laboratory: Laboratory Air Quality: Part I. A Concentration Model.
ERIC Educational Resources Information Center
Butcher, Samuel S.; And Others
1985-01-01
Offers a simple model for estimating vapor concentrations in instructional laboratories. Three methods are described for measuring ventilation rates, and the results of measurements in six laboratories are presented. The model should provide a simple screening tool for evaluating worst-case personal exposures. (JN)
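The abstract does not reproduce the model's equations; a common single-zone, well-mixed formulation of the same screening idea is sketched below (the symbols and numbers are illustrative, not taken from the article): with emission rate G, ventilation rate Q and room volume V, the concentration obeys dC/dt = G/V − (Q/V)·C, giving C(t) = (G/Q)·(1 − exp(−Q·t/V)) from a clean start and a worst-case ceiling of G/Q at steady state.

```python
import math

# Single-zone, well-mixed room model (illustrative formulation):
#   G = vapor emission rate (mg/h)
#   Q = ventilation rate (m^3/h)
#   V = room volume (m^3)
# C(t) rises toward the steady-state ceiling G/Q.

def concentration(G, Q, V, t):
    """Vapor concentration (mg/m^3) at time t (h), assuming perfect mixing."""
    return (G / Q) * (1.0 - math.exp(-Q * t / V))

# Worst-case screening example: a 500 mg/h release in a 200 m^3 lab
# ventilated at 600 m^3/h (3 air changes per hour).
G, Q, V = 500.0, 600.0, 200.0
print(concentration(G, Q, V, 2.0))   # approaches the G/Q ceiling of 0.833 mg/m^3
```

Comparing the G/Q ceiling against an exposure limit is exactly the kind of simple screening the abstract describes for evaluating worst-case personal exposures.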
A Didactic Analysis of Functional Queues
ERIC Educational Resources Information Center
Rinderknecht, Christian
2011-01-01
When first introduced to the analysis of algorithms, students are taught how to assess the best and worst cases, whereas the mean and amortized costs are considered advanced topics, usually saved for graduates. When presenting the latter, aggregate analysis is explained first because it is the most intuitive kind of amortized analysis, often…
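The canonical example behind such a course is the two-list functional queue, where any single dequeue may cost O(n) (a list reversal) but aggregate analysis shows that n enqueues and n dequeues together cost O(n), i.e. O(1) amortized per operation. A minimal sketch (mine, not the article's):

```python
# Persistent two-list queue: enqueue conses onto the rear, dequeue pops the
# front, and when the front is empty the rear is reversed. Each element is
# reversed at most once, so the reversal cost amortizes to O(1) per operation.
class FunctionalQueue:
    def __init__(self, front=(), rear=()):
        self.front = tuple(front)   # items leave from here
        self.rear = tuple(rear)     # items arrive here, newest first

    def enqueue(self, x):
        # Conceptually O(1) cons; Python tuple concatenation is O(len(rear)),
        # a linked list would make it truly constant time.
        return FunctionalQueue(self.front, (x,) + self.rear)

    def dequeue(self):
        if not self.front:
            if not self.rear:
                raise IndexError("dequeue from empty queue")
            front = tuple(reversed(self.rear))  # the occasional O(n) step
            return front[0], FunctionalQueue(front[1:], ())
        return self.front[0], FunctionalQueue(self.front[1:], self.rear)

q = FunctionalQueue()
for i in range(3):
    q = q.enqueue(i)
x, q = q.dequeue()   # x == 0: FIFO order is preserved
```

Because each operation returns a new queue and leaves the old one intact, the structure is persistent, which is what makes its amortized analysis subtler than the imperative case the aggregate argument covers.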
Carstens, Keri; Anderson, Jennifer; Bachman, Pamela; De Schrijver, Adinda; Dively, Galen; Federici, Brian; Hamer, Mick; Gielkens, Marco; Jensen, Peter; Lamp, William; Rauschen, Stefan; Ridley, Geoff; Romeis, Jörg; Waggoner, Annabel
2012-08-01
Environmental risk assessments (ERA) support regulatory decisions for the commercial cultivation of genetically modified (GM) crops. The ERA for terrestrial agroecosystems is well-developed, whereas guidance for ERA of GM crops in aquatic ecosystems is not as well-defined. The purpose of this document is to demonstrate how comprehensive problem formulation can be used to develop a conceptual model and to identify potential exposure pathways, using Bacillus thuringiensis (Bt) maize as a case study. Within problem formulation, the insecticidal trait, the crop, the receiving environment, and protection goals were characterized, and a conceptual model was developed to identify routes through which aquatic organisms may be exposed to insecticidal proteins in maize tissue. Following a tiered approach for exposure assessment, worst-case exposures were estimated using standardized models, and factors mitigating exposure were described. Based on exposure estimates, shredders were identified as the functional group most likely to be exposed to insecticidal proteins. However, even using worst-case assumptions, the exposure of shredders to Bt maize was low and studies supporting the current risk assessments were deemed adequate. Determining if early tier toxicity studies are necessary to inform the risk assessment for a specific GM crop should be done on a case by case basis, and should be guided by thorough problem formulation and exposure assessment. The processes used to develop the Bt maize case study are intended to serve as a model for performing risk assessments on future traits and crops.
Faerber, Julia; Cummins, Gerard; Pavuluri, Sumanth Kumar; Record, Paul; Rodriguez, Adrian R Ayastuy; Lay, Holly S; McPhillips, Rachael; Cox, Benjamin F; Connor, Ciaran; Gregson, Rachael; Clutton, Richard Eddie; Khan, Sadeque Reza; Cochran, Sandy; Desmulliez, Marc P Y
2018-02-01
This paper describes the design, fabrication, packaging, and performance characterization of a conformal helix antenna created on the outside of a capsule endoscope designed to operate at a carrier frequency of 433 MHz within human tissue. Wireless data transfer was established between the integrated capsule system and an external receiver. The telemetry system was tested within a tissue phantom and in vivo porcine models. Two different types of transmission modes were tested. The first mode, replicating normal operating conditions, used data packets at a steady power level of 0 dBm, while the capsule was being withdrawn at a steady rate from the small intestine. The second mode, replicating the worst-case clinical scenario of capsule retention within the small bowel, sent data with stepwise increasing power levels of -10, 0, 6, and 10 dBm, with the capsule fixed in position. The temperature of the tissue surrounding the external antenna was monitored at all times using thermistors embedded within the capsule shell to observe potential safety issues. The recorded data showed, for both modes of operation, low-error transmission with a 10⁻³ packet error rate and a 10⁻⁵ bit error rate, and no temperature increase of the tissue according to IEEE standards.
Scattered UV irradiation during VISX excimer laser keratorefractive surgery.
Hope, R J; Weber, E D; Bower, K S; Pasternak, J P; Sliney, D H
2008-04-01
To evaluate the potential occupational health hazards associated with scattered ultraviolet (UV) radiation during photorefractive keratectomy (PRK) using the VISX Star S3 excimer laser. The Laser Vision Center, National Naval Medical Center, Bethesda, Maryland, USA. Intraoperative radiometric measurements were made with the Ophir Power/Energy Meter (LaserStar Model PD-10 with silicon detector) during PRK treatments as well as during required calibration procedures at a distance of 20.3 cm from the left cornea. These measurements were evaluated using a worst-case scenario for exposure, and then compared with the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Values (TLVs) to perform a risk/hazard analysis. During the PRK procedures, the highest measured value was 248.4 nJ/pulse. During the calibration procedures, the highest measured UV scattered radiation level was 149.6 nJ/pulse. The maximum treatment time was 52 seconds. Using a worst-case scenario in which all treatments used the maximum power and time, the total energy per eye treated was 0.132 mJ/cm2 and the total UV radiation at close range (80 cm from the treated eye) was 0.0085 mJ/cm2. With a workload of 20 patients, the total occupational exposure at 80 cm to actinic UV radiation in an 8-hour period would be 0.425 mJ/cm2. The scattered actinic UV laser radiation from the VISX Star S3 excimer laser did not exceed occupational exposure limits during a busy 8-hour workday, provided that operating room personnel were at least 80 cm from the treated eye. While the use of protective eyewear is always prudent, this study demonstrates that the trace amounts of scattered laser emissions produced by this laser do not pose a serious health risk even without the use of protective eyewear.
2000-08-01
forefoot with the foot in the neutral position, and (b) similar to (a) but with heel landing. Although the authors reported no absolute strain values...diameter of sensors (or, in the case of a rectangular sensor, width as measured along pin axis). Worst case: strike line from inside edges of sensors...potoroo it is just prior to "toe strike". The locomotion of the potoroo is described as digitigrade, unlike humans, who walk in a plantigrade manner
ERIC Educational Resources Information Center
Goldstein, Philip J.
2009-01-01
The phrase "worst since the Great Depression" has seemingly punctuated every economic report. The United States is experiencing the worst housing market, the worst unemployment level, and the worst drop in gross domestic product since the Great Depression. Although the steady drumbeat of bad news may have made everyone nearly numb, one…
Socolovsky, Mariano; Di Masi, Gilda; Binaghi, Daniela; Campero, Alvaro; Páez, Miguel Domínguez; Dubrovsky, Alberto
2014-01-01
Thoracic outlet syndrome is a compression of the brachial plexus that remains highly controversial. Classification into true or neurogenic outlet (TTO) and disputed or non-neurogenic outlet (DTO) is becoming very popular. The former is characterized by atrophy of the intrinsic muscles of the hand, while the latter presents only sensory symptoms. The purpose of this article is to analyze the results obtained in a series of 31 patients. All patients with a diagnosis of thoracic outlet syndrome operated on between January 2003 and December 2012 with a minimum follow-up of six months were included. Age, sex, symptoms, classification, preoperative study results, complications and recurrences were analyzed. In total, 31 surgeries performed in 30 patients, 9 with TTO (8 women, mean age 24.3 years) and 21 with DTO (18 women, mean age 37.4 years, 1 recurrence), were included. Ninety percent of patients presented preoperative neurophysiological disturbances and 66.6% imaging disturbances. All TTO cases, but only 36.7% of DTO cases, showed clear pathological findings during surgical exploration. A high percentage of TTO patients (87.5% sensory, 77.7% motor) improved after surgical decompression. Among DTO patients, only 45.5% showed permanent positive changes, 13.6% temporary changes, 36.6% no changes, and 4.5% (one case) deteriorated after decompressive surgery. Complications after surgery were more frequent, though temporary, in TTO cases (33.3%) than in DTO cases (13.6%). TTO showed a favorable outcome after surgery. DTO showed a worse, but still positive, postoperative result when patients are selected properly. These data are in concordance with other recent reports.
The lionfish Pterois sp. invasion: Has the worst-case scenario come to pass?
Côté, I M; Smith, N S
2018-03-01
This review revisits the traits thought to have contributed to the success of Indo-Pacific lionfish Pterois sp. as an invader in the western Atlantic Ocean and the worst-case scenario about their potential ecological effects in light of the more than 150 studies conducted in the past 5 years. Fast somatic growth, resistance to parasites, effective anti-predator defences and an ability to circumvent predator recognition mechanisms by prey have probably contributed to rapid population increases of lionfish in the invaded range. However, evidence that lionfish are strong competitors is still ambiguous, in part because demonstrating competition is challenging. Geographic spread has likely been facilitated by the remarkable capacity of lionfish for prolonged fasting in combination with other broad physiological tolerances. Lionfish have had a large detrimental effect on native reef-fish populations in the northern part of the invaded range, but similar effects have yet to be seen in the southern Caribbean. Most other envisaged direct and indirect consequences of lionfish predation and competition, even those that might have been expected to occur rapidly, such as shifts in benthic composition, have yet to be realized. Lionfish populations in some of the first areas invaded have started to decline, perhaps as a result of resource depletion or ongoing fishing and culling, so there is hope that these areas have already experienced the worst of the invasion. In closing, we place lionfish in a broader context and argue that it can serve as a new model to test some fundamental questions in invasion ecology. © 2018 The Fisheries Society of the British Isles.
Kiatpongsan, Sorapop; Kim, Jane J
2014-01-01
Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15-30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. 
To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain.
Kiatpongsan, Sorapop; Kim, Jane J.
2014-01-01
Background Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15–30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). Methods and Findings The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. 
In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. Conclusions This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain. PMID:25198104
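The threshold logic in the abstract above reduces to simple arithmetic: the largest acceptable added vaccine cost is the willingness-to-pay threshold times the incremental discounted life expectancy, plus the cancer-treatment costs averted. All inputs below are hypothetical placeholders, not values from the study's model:

```python
# Hedged sketch of the cost-effectiveness threshold arithmetic.
# ICER = (added_cost - averted_costs) / delta_life_years <= WTP
# =>  added_cost <= WTP * delta_life_years + averted_costs

def max_added_cost(wtp_per_ly, delta_life_years, averted_costs):
    """Largest added vaccine cost per girl keeping the ICER at or below WTP."""
    return wtp_per_ly * delta_life_years + averted_costs

# Illustrative inputs: WTP = per-capita GDP of I$1200, incremental discounted
# life expectancy of 0.005 years, I$3.7 in averted treatment costs per girl.
print(round(max_added_cost(1200, 0.005, 3.7), 1))  # I$ per vaccinated girl
```

With these illustrative inputs the threshold lands near the study's base-case I$9.7 for Kenya, but only because the inputs were chosen for that purpose.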
Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia
2018-06-01
The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.
NASA Technical Reports Server (NTRS)
Guman, W. J. (Editor)
1971-01-01
Thermal vacuum design-support thruster tests indicated no problems under the worst-case conditions of sink temperature and spin rate. The reliability of the system was calculated to be 0.92 for a five-year mission; excluding the main energy-storage capacitor, it is 0.98.
40 CFR 300.320 - General pattern of response.
Code of Federal Regulations, 2010 CFR
2010-07-01
...., substantial threat to the public health or welfare of the United States, worst case discharge) of the... private party efforts, and where the discharge does not pose a substantial threat to the public health or... 40 Protection of Environment 27 2010-07-01 2010-07-01 false General pattern of response. 300.320...
Small Wars 2.0: A Working Paper on Land Force Planning After Iraq and Afghanistan
2011-02-01
official examination of future ground combat demands that look generically distinct from those undertaken in the name of the WoT. The concept of...under the worst-case rubric but for very different reasons. The latter are small wars. However, that by no means aptly describes their size
The +vbar breakout during approach to Space Station Freedom
NASA Technical Reports Server (NTRS)
Dunham, Scott D.
1993-01-01
A set of burn profiles was developed to provide bounding jet firing histories for a +vbar breakout during approaches to Space Station Freedom. The delta-v sequences were designed to place the Orbiter on a safe trajectory under worst case conditions and to try to minimize plume impingement on Space Station Freedom structure.
Providing Exemplars in the Learning Environment: The Case For and Against
ERIC Educational Resources Information Center
Newlyn, David
2013-01-01
Contemporary education has moved towards the requirement of express articulation of assessment criteria and standards in an attempt to provide legitimacy in the measurement of student performance/achievement. Exemplars are examples of best or worst practice provided in the educational environment, which are designed to assist students to increase…
Ageing of Insensitive DNAN Based Melt-Cast Explosives
2014-08-01
diurnal cycle (representative of the MEAO climate). Analysis of the ingredient composition, sensitiveness, mechanical and thermal properties was...first test condition was chosen to provide a worst-case scenario. Analysis of the ingredient composition, theoretical maximum density, sensitiveness...
Power Analysis for Anticipated Non-Response in Randomized Block Designs
ERIC Educational Resources Information Center
Pustejovsky, James E.
2011-01-01
Recent guidance on the treatment of missing data in experiments advocates the use of sensitivity analysis and worst-case bounds analysis for addressing non-ignorable missing data mechanisms; moreover, plans for the analysis of missing data should be specified prior to data collection (Puma et al., 2009). While these authors recommend only that…
Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground
ERIC Educational Resources Information Center
Oberg, Gunilla
2009-01-01
Newcomers often underestimate the challenges of interdisciplinary work and, as a rule, do not spend sufficient time to allow them to overcome differences and create common ground, which in turn leads to frustration, unresolved conflicts, and, in the worst case scenario, discontinued work. The key to successful collaboration is to facilitate the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-14
... notice is provided in accordance with the Council on Environmental Quality's regulations (40 CFR parts... interconnected, fabric-lined, sand-filled HESCO containers in order to safely pass predicted worst-case..., but will not necessarily be limited to, the potential impacts on water quality, aquatic and...
ERIC Educational Resources Information Center
Tercek, Patricia M.
This practicum study examined kindergarten teachers' perspectives regarding mixed-age groupings that included kindergarten students. The study focused on pedagogical reasons for using mixed-age grouping, ingredients necessary for successful implementation of a multiage program that includes kindergartners, and the perceived effects of a multiage…
Case Study: POLYTECH High School, Woodside, Delaware.
ERIC Educational Resources Information Center
Southern Regional Education Board, Atlanta, GA.
POLYTECH High School in Woodside, Delaware, has gone from being among the worst schools in the High Schools That Work (HSTW) network to among the best. Polytech, which is now a full-time technical high school, has improved its programs and outcomes by implementing a series of organizational, curriculum, teaching, guidance, and leadership changes,…
NASA Astrophysics Data System (ADS)
Kocan, M.; Garcia-Munoz, M.; Ayllon-Guerola, J.; Bertalot, L.; Bonnet, Y.; Casal, N.; Galdon, J.; Garcia-Lopez, J.; Giacomin, T.; Gonzalez-Martin, J.; Gunn, J. P.; Rodriguez-Ramos, M.; Reichle, R.; Rivero-Rodriguez, J. F.; Sanchis-Sanchez, L.; Vayakis, G.; Veshchev, E.; Vorpahl, C.; Walsh, M.; Walton, R.
2017-12-01
Thermal plasma loads to the ITER Fast Ion Loss Detector (FILD) are studied for a Q_DT = 10 burning-plasma equilibrium using 3D field-line tracing. The simulations are performed for a FILD insertion 9-13 cm past the port plasma-facing surface, optimized for fast-ion measurements, and include the worst-case perturbation of the plasma boundary and the error in the magnetic reconstruction. The FILD head is exposed to superimposed time-averaged ELM heat load, static inter-ELM heat flux and plasma radiation. The study includes an estimate of the instantaneous temperature rise due to individual 0.6 MJ controlled ELMs. The maximum time-averaged surface heat load is ≲12 MW/m², and for a FILD insertion time of 0.2 s it raises the FILD surface temperature to well below the melting temperature of the materials considered here. The worst-case instantaneous temperature rise during controlled 0.6 MJ ELMs is also significantly smaller than the melting temperature of e.g. tungsten or molybdenum, foreseen for the FILD housing.
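A back-of-the-envelope check on transient heating of this kind can be made with the standard semi-infinite-solid formula ΔT = 2q√(t/(πρck)). The pulse parameters below are hypothetical stand-ins, not the paper's ELM loads; the tungsten properties are approximate room-temperature values:

```python
import math

def surface_dT(q, t, k, rho, c):
    """Surface temperature rise (K) of a semi-infinite solid under a constant
    heat flux q (W/m^2) applied for time t (s): dT = 2*q*sqrt(t/(pi*rho*c*k))."""
    return 2.0 * q * math.sqrt(t / (math.pi * rho * c * k))

# Tungsten: thermal conductivity (W/m/K), density (kg/m^3), heat capacity (J/kg/K).
k_w, rho_w, c_w = 170.0, 19300.0, 134.0

# Hypothetical ELM-like pulse: 0.5 GW/m^2 for 0.25 ms.
print(round(surface_dT(0.5e9, 0.25e-3, k_w, rho_w, c_w)))  # rise in kelvin
```

Even with these assumed numbers the rise stays a few hundred kelvin, far from tungsten's ~3695 K melting point, consistent with the abstract's conclusion.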
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-03-01
The work was aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic model of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
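The minimax weighting step described above can be sketched as a grid search over the convex weight, under the simplifying (and here purely hypothetical) assumption that the combined filter's error under each perturbed model is the convex combination of the sub-filters' errors:

```python
# Hedged sketch of minimax selection of a convex combination of two
# sub-filters. The per-model error values are hypothetical.

def worst_case_error(alpha, errors):
    """errors[m] = (error of sub-filter 0, error of sub-filter 1) under
    perturbed model m; return the worst combined error over all models."""
    return max(alpha * e0 + (1 - alpha) * e1 for e0, e1 in errors)

def minimax_weight(errors, steps=1000):
    """Grid-search the convex weight minimizing the worst-case error."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda a: worst_case_error(a, errors))

# Sub-filter 0 suits model A, sub-filter 1 suits model B (hypothetical):
errors = [(1.0, 3.0),   # model A
          (4.0, 2.0)]   # model B
alpha = minimax_weight(errors)
print(round(alpha, 2))  # weight balancing the two worst cases
```

In this toy case the optimum sits where the two models' error lines cross, the defining property of a minimax solution.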
Locke, Sarah J; Deziel, Nicole C; Koh, Dong-Hee; Graubard, Barry I; Purdue, Mark P; Friesen, Melissa C
2017-02-01
We evaluated predictors of differences in published occupational lead concentrations for activities disturbing material painted with or containing lead in U.S. workplaces to aid historical exposure reconstruction. For the aforementioned tasks, 221 air and 113 blood lead summary results (1960-2010) were extracted from a previously developed database. Differences in the natural log-transformed geometric mean (GM) for year, industry, job, and other ancillary variables were evaluated in meta-regression models that weighted each summary result by its inverse variance and sample size. Air and blood lead GMs declined 5%/year and 6%/year, respectively, in most industries. Exposure contrast in the GMs across the nine jobs and five industries was higher based on air versus blood concentrations. For welding activities, blood lead GMs were 1.7 times higher in worst-case versus non-worst-case scenarios. Job, industry, and time-specific exposure differences were identified; other determinants were too sparse or collinear to characterize. Am. J. Ind. Med. 60:189-197, 2017. © 2017 Wiley Periodicals, Inc.
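A 5%/year decline in the geometric mean corresponds to a slope of ln(0.95) per year on the natural-log scale used in the meta-regression. A minimal sketch with illustrative numbers:

```python
# Hedged sketch of a constant proportional decline in a geometric mean (GM).
# The starting GM is hypothetical, not a value from the study.

def gm_after(gm0, annual_decline, years):
    """GM after `years`, given a constant proportional annual decline."""
    return gm0 * (1.0 - annual_decline) ** years

# A GM of 50 ug/m^3 declining 5%/year over 10 years:
print(round(gm_after(50.0, 0.05, 10), 2))  # → 29.94
```

A decade of 5%/year decline thus cuts the GM by about 40%, which is why the year term dominates these exposure reconstructions.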
Housing for the "Worst of the Worst" Inmates: Public Support for Supermax Prisons
ERIC Educational Resources Information Center
Mears, Daniel P.; Mancini, Christina; Beaver, Kevin M.; Gertz, Marc
2013-01-01
Despite concerns whether supermaximum security prisons violate human rights or prove effective, these facilities have proliferated in America over the past 25 years. This punishment--aimed at the "worst of the worst" inmates and involving 23-hr-per-day single-cell confinement with few privileges or services--has emerged despite little…
Kameda, Tatsuya; Inukai, Keigo; Higuchi, Satomi; Ogawa, Akitoshi; Kim, Hackjin; Matsuda, Tetsuya; Sakagami, Masamichi
2016-01-01
Distributive justice concerns the moral principles by which we seek to allocate resources fairly among diverse members of a society. Although the concept of fair allocation is one of the fundamental building blocks for societies, there is no clear consensus on how to achieve “socially just” allocations. Here, we examine neurocognitive commonalities of distributive judgments and risky decisions. We explore the hypothesis that people’s allocation decisions for others are closely related to economic decisions for oneself at behavioral, cognitive, and neural levels, via a concern about the minimum, worst-off position. In a series of experiments using attention-monitoring and brain-imaging techniques, we investigated this “maximin” concern (maximizing the minimum possible payoff) via responses in two seemingly disparate tasks: third-party distribution of rewards for others, and choosing gambles for self. The experiments revealed three robust results: (i) participants’ distributive choices closely matched their risk preferences—“Rawlsians,” who maximized the worst-off position in distributions for others, avoided riskier gambles for themselves, whereas “utilitarians,” who favored the largest-total distributions, preferred riskier but more profitable gambles; (ii) across such individual choice preferences, however, participants generally showed the greatest spontaneous attention to information about the worst possible outcomes in both tasks; and (iii) this robust concern about the minimum outcomes was correlated with activation of the right temporoparietal junction (RTPJ), the region associated with perspective taking. The results provide convergent evidence that social distribution for others is psychologically linked to risky decision making for self, drawing on common cognitive–neural processes with spontaneous perspective taking of the worst-off position. PMID:27688764
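The contrast drawn above between the "Rawlsian" maximin rule and the utilitarian max-total rule can be made concrete with hypothetical three-person allocations:

```python
# Hedged sketch contrasting two allocation rules on made-up payoff tuples.

def maximin_choice(allocations):
    """Pick the allocation whose worst-off member fares best (Rawlsian)."""
    return max(allocations, key=min)

def utilitarian_choice(allocations):
    """Pick the allocation with the largest total payoff (utilitarian)."""
    return max(allocations, key=sum)

options = [(5, 5, 5), (2, 8, 11)]   # equal vs. larger-total but unequal
print(maximin_choice(options))      # (5, 5, 5): best minimum (5 > 2)
print(utilitarian_choice(options))  # (2, 8, 11): best total (21 > 15)
```

The two rules diverge exactly when the larger pie comes at the expense of the worst-off position, which is the tension the experiments probe.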
Hydraulic Fracturing of Soils; A Literature Review.
1977-03-01
best case, or worst case. The study reported herein is an overview of one such test or technique, hydraulic fracturing, which is defined as the...formation of cracks in soil by the application of hydraulic pressure greater than the minor principal stress at that point. Hydraulic fracturing, as a...hydraulic fracturing as a means for determination of lateral stresses, the technique can still be used for determining in situ total stress and permeability at a point in a cohesive soil.
Liu, Yanjun; Liu, Yanting; Li, Hao; Fu, Xindi; Guo, Hanwen; Meng, Ruihong; Lu, Wenjing; Zhao, Ming; Wang, Hongtao
2016-12-01
Aromatic compounds (ACs) emitted from landfills have attracted much public attention due to their adverse impacts on the environment and human health. This study assessed the health risk impacts of the fugitive ACs emitted from the working face of a municipal solid waste (MSW) landfill in China. The emission data were acquired by long-term in-situ sampling using a modified wind tunnel system. The uncertainty of aromatic emissions was determined statistically and emission factors were thus developed. Two scenarios, 'normal-case' and 'worst-case', were presented to evaluate the potential health risk under different weather conditions. For this typical large anaerobic landfill, toluene was the dominant species owing to its highest release rate (3.40±3.79 g·m⁻²·d⁻¹). Despite posing negligible non-carcinogenic risk, the ACs might bring carcinogenic risks to people in the nearby area. Ethylbenzene was the major health threat substance. The cumulative carcinogenic risk impact area extends as far as ~1.5 km downwind for the normal-case scenario, and nearly 4 km for the worst-case scenario. Health risks of fugitive AC emissions from active landfills should be of concern, especially for landfills still receiving mixed MSW. Copyright © 2016 Elsevier Ltd. All rights reserved.
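A cumulative carcinogenic-risk screen of the kind described above multiplies each compound's exposure concentration by its inhalation unit risk and sums the results. The concentrations below are illustrative assumptions, and the unit-risk values, though typical of published figures, should be treated as placeholders:

```python
# Hedged sketch of a cumulative inhalation cancer-risk screen.

def cancer_risk(conc_ug_m3, unit_risk_per_ug_m3):
    """Lifetime cancer risk for one compound: concentration x unit risk."""
    return conc_ug_m3 * unit_risk_per_ug_m3

# Hypothetical downwind concentrations (ug/m^3) and unit risks ((ug/m^3)^-1):
exposures = {"ethylbenzene": (8.0, 2.5e-6), "benzene": (0.5, 7.8e-6)}
total = sum(cancer_risk(c, ur) for c, ur in exposures.values())
print(total > 1e-5)  # True: exceeds a common 1-in-100,000 screening level
```

Under these assumptions ethylbenzene dominates the sum, mirroring the study's finding that it was the major health-threat substance.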
Kruser, Jacqueline M; Nabozny, Michael J; Steffens, Nicole M; Brasel, Karen J; Campbell, Toby C; Gaines, Martha E; Schwarze, Margaret L
2015-09-01
To evaluate a communication tool called "Best Case/Worst Case" (BC/WC) based on an established conceptual model of shared decision-making. Focus group study. Older adults (four focus groups) and surgeons (two focus groups) using modified questions from the Decision Aid Acceptability Scale and the Decisional Conflict Scale to evaluate and revise the communication tool. Individuals aged 60 and older recruited from senior centers (n = 37) and surgeons from academic and private practices in Wisconsin (n = 17). Qualitative content analysis was used to explore themes and concepts that focus group respondents identified. Seniors and surgeons praised the tool for the unambiguous illustration of multiple treatment options and the clarity gained from presentation of an array of treatment outcomes. Participants noted that the tool provides an opportunity for in-the-moment, preference-based deliberation about options and a platform for further discussion with other clinicians and loved ones. Older adults worried that the format of the tool was not universally accessible for people with different educational backgrounds, and surgeons had concerns that the tool was vulnerable to physicians' subjective biases. The BC/WC tool is a novel decision support intervention that may help facilitate difficult decision-making for older adults and their physicians when considering invasive, acute medical treatments such as surgery. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.
NASA Astrophysics Data System (ADS)
Krien, Yann; Dudon, Bernard; Roger, Jean; Arnaud, Gael; Zahibo, Narcisse
2017-09-01
In the Lesser Antilles, coastal inundations from hurricane-induced storm surges pose a great threat to lives, properties and ecosystems. Assessing current and future storm surge hazards with sufficient spatial resolution is of primary interest to help coastal planners and decision makers develop mitigation and adaptation measures. Here, we use wave-current numerical models and statistical methods to investigate worst case scenarios and 100-year surge levels for the case study of Martinique under present climate or considering a potential sea level rise. Results confirm that the wave setup plays a major role in the Lesser Antilles, where the narrow island shelf impedes the piling-up of large amounts of wind-driven water on the shoreline during extreme events. The radiation stress gradients thus contribute significantly to the total surge - up to 100 % in some cases. The nonlinear interactions of sea level rise (SLR) with bathymetry and topography are generally found to be relatively small in Martinique but can reach several tens of centimeters in low-lying areas where the inundation extent is strongly enhanced compared to present conditions. These findings further emphasize the importance of waves for developing operational storm surge warning systems in the Lesser Antilles and encourage caution when using static methods to assess the impact of sea level rise on storm surge hazard.
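The 100-year surge level referred to above is a return-level calculation from an extreme-value fit. A minimal sketch assuming a Gumbel distribution (the zero-shape special case of the GEV) with hypothetical location and scale parameters:

```python
import math

# Hedged sketch of a return-level calculation; the Gumbel form and the
# fitted parameters below are illustrative assumptions, not the study's.

def gumbel_return_level(mu, sigma, return_period_years):
    """Level exceeded on average once every `return_period_years` years:
    x_T = mu - sigma * ln(-ln(1 - 1/T))."""
    p = 1.0 / return_period_years
    return mu - sigma * math.log(-math.log(1.0 - p))

# Hypothetical annual-maximum fit: location 0.6 m, scale 0.25 m.
print(round(gumbel_return_level(0.6, 0.25, 100), 2))  # 100-year level in metres
```

In practice the study combines such statistics with wave-current modelling, since the wave setup contributes a large share of the total surge.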
Primary Spinal Cord Melanoma: A Case Report and a Systemic Review of Overall Survival.
Zhang, Mingzhe; Liu, Raynald; Xiang, Yi; Mao, Jianhui; Li, Guangjie; Ma, Ronghua; Sun, Zhaosheng
2018-06-01
The incidence of primary spinal cord melanoma (PSCM) is rare. Several case series and case reports have been published in the literature. However, the predictive factors of PSCM survival and management options are not discussed in detail. We present a case of PSCM; total resection was achieved and chemotherapy was given postoperatively. A comprehensive search was performed on PubMed's electronic database using the words "primary spinal cord melanoma." Survival rates by gender, location, treatment, and metastasis condition were collected from the published articles and analyzed. Fifty-nine cases were eligible for the survival analysis; 54% were male and 46% female. The most common location was the thorax. Patient sex and tumor location did not influence overall survival. The major presenting symptoms were weakness and paresthesia of the extremities. Metastasis or dissemination was noted in 45.16% of 31 patients. In the Kaplan-Meier survival analysis, patients who had metastasis had the worst prognosis. Extent of resection was not related to mortality. Patients who received surgery or surgery with adjuvant therapy had a better median survival than those who had adjuvant therapy alone. Prognosis was worst in patients who underwent only adjuvant therapy without surgery (5 months). Surgery is the first treatment of choice in treating PSCM. The goal of tumor resection is to reduce symptoms. Adjuvant therapy after surgery had a beneficial effect on limiting metastasis. Copyright © 2018 Elsevier Inc. All rights reserved.
Magnetic force study for the helical afterburner for the European XFEL
NASA Astrophysics Data System (ADS)
Li, Peng; Wei, Tao; Li, Yuhui; Pflueger, Joachim
2017-05-01
At present the SASE3 undulator line at the European XFEL uses a planar undulator producing linearly polarized soft X-ray radiation only. To satisfy the demand for circularly polarized radiation, a helical undulator system, the so-called afterburner, is under construction. It will be operated as a radiator using the pre-bunched beam of the SASE3 undulator system. Among several options for the magnetic structure the Apple-X geometry was chosen. This is a pure permanent magnet undulator using NdFeB material. Four magnet arrays are arranged symmetrically around the beam axis. Polarization can be changed by adjusting the phase shift (PS) between the two orthogonal structures. The field strength can be adjusted either by gap adjustment or alternatively by the amplitude shift (AS) scheme. For an engineering design the maximum values of forces and torques on each of the components under worst-case operational conditions are important. The superposition principle is used to reduce calculation time. It is found that the maximum forces Fx, Fy and Fz for a 2 m long Apple-X undulator are 1.8×10⁴ N, 2.4×10⁴ N and 2.3×10⁴ N, respectively. More results are presented in this paper.
Conceptual design of multi-source CCS pipeline transportation network for Polish energy sector
NASA Astrophysics Data System (ADS)
Isoli, Niccolo; Chaczykowski, Maciej
2017-11-01
The aim of this study was to identify an optimal CCS transport infrastructure for the Polish energy sector with regard to a selected European Commission Energy Roadmap 2050 scenario. The work covers identification of the offshore storage site location and CO2 pipeline network design and sizing for deployment at a national scale, along with a CAPEX analysis. It was conducted for the worst-case scenario, wherein the power plants operate under full-load conditions. The input data for the evaluation of CO2 flow rates (flue gas composition) were taken from a selected cogeneration plant with a maximum electric capacity of 620 MW, and the results were extrapolated from these data given the power outputs of the remaining units. A graph search algorithm was employed to estimate pipeline infrastructure costs to transport 95 MT of CO2 annually, which amount to about 612.6 M€. Additional pipeline infrastructure costs will have to be incurred after 9 years of operation of the system due to limited storage site capacity. The results show that CAPEX estimates for CO2 pipeline infrastructure cannot be based on natural gas infrastructure data, since the two systems differ in pipe wall thickness, which affects material cost.
Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.
Mohammed, Nazmi A; Elkarim, Mohammed Abd
2015-08-10
This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
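RSSI-based VLC localization of the kind evaluated above typically starts from the line-of-sight Lambertian channel gain. The sketch below inverts it for distance and, by construction, ignores the diffuse reflections that the study shows dominate the error; all parameters are illustrative:

```python
import math

# Hedged sketch of the line-of-sight (LOS) Lambertian channel gain used in
# RSSI-based VLC ranging. Diffuse wall reflections are NOT modeled here.

def los_gain(d, phi, psi, m=1, area=1e-4):
    """DC channel gain H(0) = (m+1)*A/(2*pi*d^2) * cos^m(phi) * cos(psi),
    for distance d, irradiance angle phi, incidence angle psi, Lambertian
    order m, and detector area A (m^2)."""
    return (m + 1) * area / (2 * math.pi * d * d) * math.cos(phi) ** m * math.cos(psi)

# Received power and a naive inverse-square distance estimate (angles = 0):
pt = 0.45                       # transmitted LED power (W), as in the study
d_true = 2.0
pr = pt * los_gain(d_true, 0.0, 0.0)
d_est = math.sqrt(pt * 2 * 1e-4 / (2 * math.pi * pr))
print(round(d_est, 3))  # recovers d_true only because reflections are ignored
```

Once diffuse reflected power is added to pr, this naive inversion overestimates the gain and misplaces the receiver, which is the error mechanism the study quantifies.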
Ilbäck, N-G; Alzin, M; Jahrl, S; Enghardt-Barbieri, H; Busk, L
2003-02-01
Few sweetener intake studies have been performed on the general population and only one study has been specifically designed to investigate diabetics and children. This report describes a Swedish study on the estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin by children (0-15 years) and adult male and female diabetics (types I and II) of various ages (16-90 years). Altogether, 1120 participants were asked to complete a questionnaire about their sweetener intake. The response rate (71%, range 59-78%) was comparable across age and gender groups. The most consumed 'light' foodstuffs were diet soda, cider, fruit syrup, table powder, table tablets, table drops, ice cream, chewing gum, throat lozenges, sweets, yoghurt and vitamin C. The major sources of sweetener intake were beverages and table powder. About 70% of the participants, equally distributed across all age groups, read the manufacturer's specifications of the food products' content. The estimated intakes showed that neither men nor women exceeded the ADI for acesulfame-K; however, using worst-case calculations, high intakes were found in young children (169% of ADI). In general, the aspartame intake was low. Children had the highest estimated (worst case) intake of cyclamate (317% of ADI). Children's estimated intake of saccharin only slightly exceeded the ADI at the 5% level for fruit syrup. Children had an unexpected high intake of tabletop sweeteners, which, in Sweden, is normally based on cyclamate. The study was performed during two winter months when it can be assumed that the intake of sweeteners was lower as compared with during warm, summer months. Thus, the present study probably underestimates the average intake on a yearly basis. 
However, our worst-case calculations based on maximum permitted levels were performed on each individual sweetener, although exposure is probably relatively evenly distributed among all sweeteners, except for cyclamate containing table sweeteners.
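The worst-case %ADI figures above follow from a simple ratio: estimated daily intake at maximum permitted levels, divided by the ADI scaled by body weight. The food level, ADI, and body weight below are illustrative assumptions, not the study's data:

```python
# Hedged sketch of a worst-case %ADI calculation for one sweetener.

def percent_of_adi(daily_intake_mg, adi_mg_per_kg, body_weight_kg):
    """Daily intake expressed as a percentage of the acceptable daily intake."""
    return 100.0 * daily_intake_mg / (adi_mg_per_kg * body_weight_kg)

# A 20 kg child drinking 0.5 L/day of a beverage at a hypothetical maximum
# permitted cyclamate level of 250 mg/L, against an assumed ADI of 7 mg/kg bw:
print(round(percent_of_adi(0.5 * 250, 7.0, 20.0), 1))  # percent of ADI
```

The low body weight in the denominator is why children exceed the ADI in worst-case calculations long before adults do.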
2013-01-01
Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we expose that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can account both for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. 
We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded approach to mixture risk assessment. PMID:23816180
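The conventional split of the default factor of 100 into two sub-factors of 10 (toxicokinetic and toxicodynamic, each further divisible into inter-species and inter-individual parts) reduces to a division. A minimal sketch with a hypothetical NOAEL:

```python
# Hedged sketch of deriving a tolerable dose with default uncertainty
# sub-factors. The NOAEL value is hypothetical.

def tolerable_dose(noael_mg_per_kg, interspecies=10.0, intraspecies=10.0):
    """Tolerable daily dose: NOAEL divided by the product of sub-factors."""
    return noael_mg_per_kg / (interspecies * intraspecies)

print(tolerable_dose(50.0))  # NOAEL of 50 mg/kg bw/day -> 0.5 mg/kg bw/day
```

The abstract's point is that this multiplication of sub-factors represents an adequate, not a worst-case, scenario, and says nothing about mixture effects.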
Martin, Olwenn V; Scholze, Martin; Kortenkamp, Andreas
2013-07-01
Berger-Preiss, Edith; Koch, Wolfgang; Gerling, Susanne; Kock, Heiko; Appel, Klaus E
2009-09-01
Five commercially available insect sprays were applied in a model room. Spraying was performed in accordance with the manufacturers' instructions and in an overdosed manner in order to simulate worst-case conditions or an unforeseeable misuse. In addition, we examined electro-vaporizers. The Respicon aerosol monitoring system was applied to determine inhalation exposure. During normal spraying (10 seconds) and during the following 2-3 minutes, exposure concentrations ranged from 70 to 590 microg/m3 for the pyrethroids tetramethrin, d-phenothrin, cyfluthrin, bioallethrin, and the pyrethrins. Calculated inhalable doses were 2-16 microg. A concentration of approximately 850 microg chlorpyrifos/m3 (inhalable dose: approximately 20 microg) was determined when the "Contra insect fly spray" was applied. The highest exposure concentrations (1100-2100 microg/m3) were measured for piperonyl butoxide (PBO), corresponding to an inhalation intake of 30-60 microg. When simulating worst-case conditions, exposure concentrations of 200-3400 microg/m3 and inhalable doses of 10-210 microg were determined for the various active substances. The highest concentrations (4800-8000 microg/m3) were measured for PBO (inhalable: 290-480 microg). With the "Nexa Lotte" plug-in mosquito killer, concentrations of d-allethrin were in the range of 5-12 microg/m3 and those of PBO 0.5-2 microg/m3, while with the "Paral" plug-in mosquito killer, concentrations of 0.4-5 microg/m3 for pyrethrins and 1-7 microg/m3 for PBO were measured. Potential dermal exposures were determined using exposure pads. Between 80 and 1000 microg of active substance (tetramethrin, phenothrin, cyfluthrin, bioallethrin, pyrethrins, chlorpyrifos) were deposited on the clothing over the total body surface area of the spray user. The highest levels (up to 3000 microg) were determined for PBO. Worst-case uses of the sprays led to 5-9 times higher concentrations. 
Also, a 2-hour stay near an operating electro-vaporizer led to contamination of the clothing (total amounts on the whole body were 450 microg d-allethrin and 50 microg PBO for the "Nexa Lotte" plug-in mosquito killer, and 80 microg pyrethrins and 190 microg PBO for the "Paral" plug-in mosquito killer). Human biomonitoring data revealed urine concentrations of the metabolite (E)-trans-chrysanthemum dicarboxylic acid ((E)-trans-CDCA) between 1.7 microg/l and 7.1 microg/l after 5 minutes of exposure to the different sprays. The use of electro-vaporizers also led to (E)-trans-CDCA concentrations in the urine in the range of 1.0 microg/l to 6.2 microg/l (1-3 hour exposure period). The exposure data presented can be used for human risk assessment when these biocidal products are applied indoors. The airborne concentrations of the non-volatile active chemical compounds could be predicted from first principles using a deterministic exposure model (SprayExpo).
Relaxing USOS Solar Array Constraints for Russian Vehicle Undocking
NASA Technical Reports Server (NTRS)
Menkin, Evgeny; Schrock, Mariusz; Schrock, Rita; Zaczek, Mariusz; Gomez, Susan; Lee, Roscoe; Bennet, George
2011-01-01
With the retirement of Space Shuttle cargo delivery capability and the ten-year life extension of the International Space Station (ISS), more emphasis is being put on preserving the service life of ISS critical components. Current restrictions on United States Orbital Segment (USOS) Solar Array (SA) positioning during Russian Vehicle (RV) departure from ISS nadir and zenith ports cause the SAs to be positioned in the plume field of Service Module thrusters, leading to degradation of the SAs as well as potential damage to the Sun-tracking Beta Gimbal Assemblies (BGA). These restrictions are imposed because of the single-fault-tolerant RV Motion Control System (MCS), which does not meet ISS safety requirements for catastrophic hazards and dictates a 16-degree Solar Array Rotary Joint position, which ensures that ISS and RV relative motion post separation does not lead to collision. The purpose of this paper is to describe a methodology and the analysis that was performed to determine relative motion trajectories of the ISS and separating RV for nominal and contingency cases. The analysis was performed in three phases: ISS free drift prior to Visiting Vehicle separation, ISS and Visiting Vehicle relative motion analysis, and clearance analysis. First, the ISS free drift analysis determined the worst-case attitude and attitude rate excursions prior to RV separation based on a series of different configurations and mass properties. Next, the relative motion analysis calculated the separation trajectories while varying the initial conditions, such as docking mechanism performance, Visiting Vehicle MCS failure, departure port location, and ISS attitude and attitude rates at the time of separation. The analysis employed both orbital mechanics and rigid body rotation calculations while accounting for various atmospheric conditions and gravity gradient effects. 
The resulting relative motion trajectories were then used to determine the worst case separation envelopes during the clearance analysis. Analytical models were developed individually for each stage and the results were used to build initial conditions for the following stages. In addition to the analysis approach, this paper also discusses the analysis results, showing worst case relative motion envelopes, the recommendations for ISS appendage positioning and the suggested approach for future analyses.
Gabbe, Belinda J.; Harrison, James E.; Lyons, Ronan A.; Jolley, Damien
2011-01-01
Background: Injury is a leading cause of the global burden of disease (GBD). Estimates of non-fatal injury burden have been limited by a paucity of empirical outcomes data. This study aimed to (i) establish the 12-month disability associated with each GBD 2010 injury health state, and (ii) compare approaches to modelling the impact of multiple injury health states on disability as measured by the Glasgow Outcome Scale – Extended (GOS-E). Methods: 12-month functional outcomes for 11,337 survivors to hospital discharge were drawn from the Victorian State Trauma Registry and the Victorian Orthopaedic Trauma Outcomes Registry. ICD-10 diagnosis codes were mapped to the GBD 2010 injury health states. Cases with a GOS-E score >6 were defined as "recovered." A split dataset approach was used. Cases were randomly assigned to development or test datasets. Probability of recovery for each health state was calculated using the development dataset. Three logistic regression models were evaluated: a) additive, multivariable; b) "worst injury;" and c) multiplicative. Models were adjusted for age and comorbidity and investigated for discrimination and calibration. Findings: A single injury health state was recorded for 46% of cases (1–16 health states per case). The additive (C-statistic 0.70, 95% CI: 0.69, 0.71) and "worst injury" (C-statistic 0.70; 95% CI: 0.68, 0.71) models demonstrated higher discrimination than the multiplicative (C-statistic 0.68; 95% CI: 0.67, 0.70) model. The additive and "worst injury" models demonstrated acceptable calibration. Conclusions: The majority of patients survived with persisting disability at 12 months, highlighting the importance of improving estimates of non-fatal injury burden. Additive and "worst" injury models performed similarly. GBD 2010 injury states were moderately predictive of recovery 1-year post-injury. 
Further evaluation using additional measures of health status and functioning and comparison with the GBD 2010 disability weights will be needed to optimise injury states for future GBD studies. PMID:21984951
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by optimising an appropriate combination of the mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
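For concreteness, the tail-mean criterion described in this abstract can be sketched in a few lines of Python. This is a minimal illustration under assumed conditions (a finite, equally weighted scenario set; the function names and the convex mean/tail-mean combination weight are illustrative, not taken from the paper).

```python
import math

def tail_mean(outcomes, beta):
    """Mean of the worst ceil(beta * n) scenario outcomes
    (smaller outcome = worse result)."""
    k = math.ceil(beta * len(outcomes))
    worst = sorted(outcomes)[:k]
    return sum(worst) / len(worst)

def combined_criterion(outcomes, beta, lam):
    """Convex combination of the overall mean and the tail mean,
    in the spirit of the interval-probability robust solution concept."""
    mean = sum(outcomes) / len(outcomes)
    return lam * mean + (1 - lam) * tail_mean(outcomes, beta)
```

With `beta = 1` the tail mean reduces to the ordinary mean, and as `beta` shrinks it approaches the pure worst-case criterion, which is the spectrum of robustness the abstract describes.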
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoke, Anderson; Nelson, Austin; Miller, Brian
As PV and other DER systems are connected to the grid at increased penetration levels, island detection may become more challenging for two reasons: 1.) In islands containing many DERs, active inverter-based anti-islanding methods may have more difficulty detecting islands because each individual inverter's efforts to detect the island may be interfered with by the other inverters in the island. 2.) The increasing numbers of DERs are leading to new requirements that DERs ride through grid disturbances and even actively try to regulate grid voltage and frequency back towards nominal operating conditions. These new grid support requirements may directly or indirectly interfere with anti-islanding controls. This report describes a series of tests designed to examine the impacts of both grid support functions and multi-inverter islands on anti-islanding effectiveness. Crucially, the multi-inverter anti-islanding tests described in this report examine scenarios with multiple inverters connected to multiple different points on the grid. While this so-called 'solar subdivision' scenario has been examined to some extent through simulation, this is the first known work to test it using hardware inverters. This was accomplished through the use of power hardware-in-the-loop (PHIL) simulation, which allows the hardware inverters to be connected to a real-time transient simulation of an electric power system that can be easily reconfigured to test various distribution circuit scenarios. The anti-islanding test design was a modified version of the unintentional islanding test in IEEE Standard 1547.1, which creates a balanced, resonant island with the intent of creating a highly challenging condition for island detection. Three common, commercially available single-phase PV inverters from three different manufacturers were tested. 
The first part of this work examined each inverter individually using a series of pure hardware resistive-inductive-capacitive (RLC) resonant load based anti-islanding tests to determine the worst-case configuration of grid support functions for each inverter. A grid support function is a function an inverter performs to help stabilize the grid or drive the grid back towards its nominal operating point. The four grid support functions examined here were voltage ride-through, frequency ride-through, Volt-VAr control, and frequency-Watt control. The worst-case grid support configuration was defined as the configuration that led to the maximum island duration (or run-on time, ROT) out of 50 tests of each inverter. For each of the three inverters, it was observed that maximum ROT increased when voltage and frequency ride-through were activated. No conclusive evidence was found that Volt-VAr control or frequency-Watt control increased maximum ROT. Over all single-inverter test cases, the maximum ROT was 711 ms, well below the two-second limit currently imposed by IEEE Standard 1547-2003. A subsequent series of 244 experiments tested all three inverters simultaneously in the same island. These tests again used a procedure based on the IEEE 1547.1 unintentional islanding test to create a difficult-to-detect island condition. For these tests, which used the two worst-case grid support function configurations from the single-inverter tests, the inverters were connected to a variety of island circuit topologies designed to represent the variety of multiple-inverter islands that may occur on real distribution circuits. The interconnecting circuits and the resonant island load itself were represented in the real-time PHIL model. 
PHIL techniques similar to those employed here have been previously used and validated for anti-islanding tests, and the PHIL resonant load model used in this test was successfully validated by comparing single-inverter PHIL tests to conventional tests using an RLC load bank.
Robust media processing on programmable power-constrained systems
NASA Astrophysics Data System (ADS)
McVeigh, Jeff
2005-03-01
To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
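The buffer-fullness-driven voltage/frequency adjustment this abstract highlights can be sketched roughly as follows. The thresholds, frequency steps, and function names below are invented placeholders for illustration, not values from the paper.

```python
# Available processor frequency steps (MHz); in a real DVFS system each
# step would also imply a lower supply voltage and thus lower power.
FREQ_STEPS_MHZ = [300, 600, 900, 1200]

def select_frequency(buffer_fullness):
    """Pick the lowest frequency that keeps the stream buffer healthy.

    A fuller buffer means decoding is running ahead of playback, so the
    processor can slow down to save power; a nearly empty buffer means
    decoding is falling behind and the processor must speed up.
    buffer_fullness is a fraction in [0, 1].
    """
    if buffer_fullness > 0.75:
        return FREQ_STEPS_MHZ[0]
    elif buffer_fullness > 0.50:
        return FREQ_STEPS_MHZ[1]
    elif buffer_fullness > 0.25:
        return FREQ_STEPS_MHZ[2]
    else:
        return FREQ_STEPS_MHZ[3]  # buffer nearly empty: run at full speed
```

The design choice is the key point of the paper's enhancement: instead of provisioning for the worst case at all times, the controller exploits slack signalled by buffer occupancy.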
Dark current and radiation shielding studies for the ILC main linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mokhov, Nikolai V.; Rakhno, I. L.; Solyak, N. A.
2016-12-05
Electrons of dark current (DC), generated in high-gradient superconducting RF (SRF) cavities due to field emission, can be accelerated up to very high energies—19 GeV in the case of the International Linear Collider (ILC) main linac—before they are removed by focusing and steering magnets. Electromagnetic and hadron showers generated by such electrons can represent a significant radiation threat to the linac equipment and personnel. In our study, an operational scenario is analysed which can be considered the worst case for the main linac regarding the DC contribution to the radiation environment in the main linac tunnel. A detailed modelling is performed for the DC electrons which are emitted from the surface of the SRF cavities and can be repeatedly accelerated in the high-gradient fields of many SRF cavities. Results of MARS15 Monte Carlo calculations, performed for the current main linac tunnel design, reveal that the prompt dose design level of 25 μSv/hr in the service tunnel can be provided by a 2.3-m thick concrete wall between the main and service tunnels.
Characterization of Lunar Polar Illumination from a Power System Perspective
NASA Technical Reports Server (NTRS)
Fincannon, James
2008-01-01
This paper presents the results of illumination analyses for the lunar south and north pole regions obtained using an independently developed analytical tool and two types of digital elevation models (DEM). One DEM was based on radar height data from Earth observations of the lunar surface and the other was a combination of the radar data with a separate dataset generated using Clementine spacecraft stereo imagery. The analysis tool enables the assessment of illumination at most locations in the lunar polar regions for any time and any year. Maps are presented for both lunar poles for the worst case winter period (the critical power system design and planning bottleneck) and for the more favorable best case summer period. Average illumination maps are presented to help understand general topographic trends over the regions. Energy storage duration maps are presented to assist in power system design. Average illumination fraction, energy storage duration, solar/horizon terrain elevation profiles and illumination fraction profiles are presented for favorable lunar north and south pole sites which have the potential for manned or unmanned spacecraft operations. The format of the data is oriented for use by power system designers to develop mass optimized solar and energy storage systems.
2016-11-01
low-power RF transmissions used by the OBAN system. B. Threat Analysis Methodology To analyze the risk presented by a particular threat we use a... power efficiency and in the absolute worst case a compromise of the wireless channel could result in death. Fitness trackers on the other hand are...analysis is intended to inform the development of secure RT-PSM architectures. I. INTRODUCTION The development of very low-power computing devices and
Del Vecchio, Fabrício Boscolo; Franchini, Emerson
2013-08-01
This response to Amtmann's letter emphasizes that the knowledge of the typical time structure, as well as its variation, together with the main goal of the mixed martial arts athletes--to win by knock out or submission--need to be properly considered during the training sessions. Example with other combat sports are given and discussed, especially concerning the importance of adapting the physical conditioning workouts to the technical-tactical profile of the athlete and not the opposite.
The reduction of a "safety catastrophic" potential hazard: A case history
NASA Technical Reports Server (NTRS)
Jones, J. P.
1971-01-01
A worst case analysis is reported on the safety of time watch movements for triggering explosive packages on the lunar surface in an experiment to investigate physical lunar structural characteristics through induced seismic energy waves. Considered are the combined effects of low pressure, low temperature, lunar gravity, gear train error, and position. Control measures consist of a sealed control cavity and design requirements to prevent overbanking in the mainspring torque curve. Thus, the potential hazard is reduced to a negligible safety risk.
Code of Federal Regulations, 2011 CFR
2011-07-01
... spill mitigation procedures. (i) This subsection must describe the volume(s) and oil groups that would... applicable, the worst case discharge from the non-transportation-related facility. This must be the same volume provided in the response plan for the non-transportation-related facility. (ii) This subsection...
USDA-ARS?s Scientific Manuscript database
Floods have negative impacts on society, causing damages in infrastructures and industry, and in the worst cases, causing loss of human lives. Thus early and accurate warning is crucial to significantly reduce the impacts on public safety and economy. Reliable flood warning can be generated using ...
Evaluation of Bias Correction Methods for "Worst-case" Selective Non-participation in NAEP
ERIC Educational Resources Information Center
McLaughlin, Don; Gallagher, Larry; Stancavage, Fran
2004-01-01
With the advent of No Child Left Behind (NCLB), the context for NAEP participation is changing. Whereas in the past participation in NAEP has always been voluntary, participation is now mandatory for some grades and subjects among schools receiving Title I funds. While this will certainly raise school-level participation rates in the mandated…
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...
Algorithm Diversity for Resilient Systems
2016-06-27
data structures. SUBJECT TERMS: computer security, software diversity, program transformation ...systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity...worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different
ERIC Educational Resources Information Center
Magnuson, Peter
2013-01-01
Imagine a football team without a quarterback. Imagine a ship without a captain. Imagine a kitchen without a chef. Right now, you are probably running a number of worst-case scenarios through your head: a losing season; a ship adrift; some not-so-tasty cookies. The reason your mind has conjured up these end results is because in each of these…
ERIC Educational Resources Information Center
Magee, Michael
2014-01-01
In 2007, the case could be made that Rhode Island had, dollar for dollar, the worst-performing public education system in the United States. Despite per-pupil expenditures ranking in the top 10 nationally, the state's 8th graders fared no better than 40th in reading and 33rd in math on the National Assessment of Educational Progress (NAEP). Only…
Improved Multiple-Species Cyclotron Ion Source
NASA Technical Reports Server (NTRS)
Soli, George A.; Nichols, Donald K.
1990-01-01
Use of the pure isotope 86Kr instead of natural krypton in a multiple-species ion source enables the source to produce krypton ions separated from argon ions by tuning the cyclotron with which the source is used. The added capability to produce and separate krypton ions at kinetic energies of 150 to 400 MeV is necessary for simulation of worst-case ions occurring in outer space.
States' Fiscal Woes Raise Anxiety Level on School Budgets
ERIC Educational Resources Information Center
Zehr, Mary Ann
2007-01-01
A long-projected revenue chill is beginning to bite in a number of states, putting pressure on education policymakers to defend existing programs--and, in some cases, forcing them to prepare for the worst if budget cuts become a reality. The causes vary, from slack property-tax receipts in Florida to a chronically sluggish economy in Michigan. But…
Real Time Energy Management Control Strategies for Hybrid Powertrains
NASA Astrophysics Data System (ADS)
Zaher, Mohamed Hegazi Mohamed
In order to improve fuel efficiency and reduce emissions of mobile vehicles, various hybrid powertrain concepts have been developed over the years. This thesis focuses on embedded control of hybrid powertrain concepts for mobile vehicle applications. An optimal robust control approach is used to develop a real-time energy management strategy for continuous operations. The main idea is to store the normally wasted mechanical regenerative energy in energy storage devices for later usage. The regenerative energy recovery opportunity exists in any condition where the speed of motion is in the opposite direction to the applied force or torque. This is the case when the vehicle is braking or decelerating, or when the motion is driven by gravitational force or by the load. There are three main concepts for regenerative energy storage devices in hybrid vehicles: electric, hydraulic, and flywheel. The real-time control challenge is to balance the system power demand between the engine and the hybrid storage device, without depleting the energy storage device or stalling the engine in any work cycle, while making optimal use of the energy saving opportunities in a given operational, often repetitive cycle. In the worst-case scenario, only the engine is used and the hybrid system is completely disabled. A rule-based control is developed and tuned for different work cycles and linked to a gain scheduling algorithm, which identifies the cycle being performed by the machine and its position via GPS, and maps them to the gains.
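A rule-based engine/storage power split of the kind this abstract describes could look roughly like the sketch below. The state-of-charge bounds, power limits, and function names are hypothetical placeholders, not the thesis's actual controller.

```python
SOC_MIN, SOC_MAX = 0.3, 0.9   # assumed storage state-of-charge bounds
HYBRID_MAX_KW = 50.0          # assumed storage power limit

def split_power(demand_kw, soc):
    """Return (engine_kw, hybrid_kw) for a power demand in kW.

    Negative demand means regenerative opportunity (braking, overrun,
    gravity- or load-driven motion): absorb it into storage if there is
    room. Positive demand is served by storage first when charged; the
    worst case falls back to engine-only operation.
    """
    if demand_kw < 0:
        if soc < SOC_MAX:
            return 0.0, max(demand_kw, -HYBRID_MAX_KW)  # store the energy
        return 0.0, 0.0  # storage full: energy wasted in friction brakes
    if soc > SOC_MIN:
        hybrid = min(demand_kw, HYBRID_MAX_KW)
        return demand_kw - hybrid, hybrid  # engine covers the remainder
    return demand_kw, 0.0  # worst case: storage depleted, engine alone
```

A gain-scheduling layer, as in the thesis, would retune such thresholds per identified work cycle rather than keeping them fixed.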
Humidity Testing for Human Rated Spacecraft
NASA Technical Reports Server (NTRS)
Johnson, Gary B.
2009-01-01
Determination that equipment can operate in and survive exposure to the humidity environments unique to human-rated spacecraft presents widely varying challenges. Equipment may need to operate in habitable volumes where the atmosphere contains perspiration, exhalation, and residual moisture. Equipment located outside the pressurized volumes may be exposed to repetitive diurnal cycles that may result in moisture absorption and/or condensation. Equipment may be thermally affected by conduction to coldplate or structure, by forced or ambient air convection (hot/cold or wet/dry), or by radiation to space through windows or hatches. The equipment's on/off state also contributes to its susceptibility to humidity. Like equipment is sometimes used in more than one location and under varying operational modes. Due to these challenges, developing a test scenario that bounds all physical, environmental and operational modes for both pressurized and unpressurized volumes requires an integrated assessment to determine the "worst-case combined conditions." Such an assessment was performed for the Constellation program, considering all of the aforementioned variables, and a test profile was developed based on approximately 300 variable combinations. The test profile has been vetted by several subject matter experts and partially validated by testing. Final testing to determine the efficacy of the test profile on actual space hardware is in the planning stages. When validation is completed, the test profile will be formally incorporated into NASA document CxP 30036, "Constellation Environmental Qualification and Acceptance Testing Requirements (CEQATR)."
Environmental Impact From Accelerator Operation at SLAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, James C
1999-03-22
Environmental impacts from electron accelerator operations at the Stanford Linear Accelerator Center, which is located near populated areas, are illustrated using examples of three different accelerator facilities: the low power (a few watts) SSRL, the high power (a few kilowatts) PEP-II, and the 50-kW SLC. Three types of major impacts are discussed: (1) off-site doses from skyshine radiation, mainly neutrons, (2) off-site doses from radioactive air emission, mainly 13N, and (3) radioactivities, mainly 3H, produced in the groundwater. It was found that, for SSRL operation, the skyshine radiation results in a MEI (Maximum Exposed Individual) dose of 0.3 μSv/y, while a conservative calculation using CAP88 showed a MEI dose of 0.36 μSv/y from radioactive air releases. The calculated MEI doses due to future PEP-II operation are 30 μSv/y from skyshine radiation and 2 μSv/y from air releases. The population doses due to radioactive air emission are 0.5 person-mSv from SSRL and 12 person-mSv from PEP-II. Because skyshine dose falls off more strongly as distance increases, the population dose from skyshine radiation is smaller than that from air release. The third environmental impact, tritium activity produced in the groundwater, was also demonstrated to be acceptable from both the well water measurements and the FLUKA calculations for the worst case of the SLC high-power dump.
NASA Astrophysics Data System (ADS)
Pagnoni, Gianluca; Tinti, Stefano
2017-04-01
The city of Augusta is located in the southern part of the eastern coast of Sicily. The Italian tsunami catalogue and paleo-tsunami surveys indicate that at least 7 tsunami events affected the bay of Augusta in the last 4,000 years, two of which are associated with earthquakes (1169 and 1693) that destroyed the city. For these reasons Augusta has been chosen in the project ASTARTE as a test site for the study of issues related to tsunami hazard and risk. In the last two years we studied hazard through the approach of the worst-case credible scenario and carried out vulnerability and damage analysis for buildings. In this work, we integrate that research and estimate the damage to people and the economic loss of buildings due to structural damage. As regards inundation, we assume both uniform inundation levels (bath-tub hypothesis) and inundation data resulting from the worst-case scenario elaborated for the area by Armigliato et al. (2015). Human damage is calculated in three steps using the method introduced by Pagnoni et al. (2016), following the work of Terrier et al. (2012) and Koshimura et al. (2009). First, we use census data to estimate the number of people present in each residential building affected by inundation; second, based on water column depth and building type, we evaluate the level of damage to people; third, we provide an estimate of fatalities. The economic loss is computed for two types of buildings (residential and trade-industrial) by using data on inundation and data from the real estate market. This study was funded by the EU Project ASTARTE - "Assessment, STrategy And Risk Reduction for Tsunamis in Europe", Grant 603839, 7th FP (ENV.2013.6.4-3)
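The three-step human-damage estimate outlined in this abstract (occupants per building, damage level from water depth and building type, expected fatalities) can be illustrated schematically. The depth thresholds, building classes, and fatality rates below are placeholder values, not those used in the study.

```python
# Hypothetical depth thresholds (metres) separating damage levels 0..3
# for two illustrative building classes.
DAMAGE_THRESHOLDS_M = {
    "masonry":    [0.5, 1.5, 3.0],
    "reinforced": [1.0, 2.5, 5.0],
}
# Hypothetical fatality rate per damage level 0..3.
FATALITY_RATE = [0.0, 0.01, 0.1, 0.5]

def expected_fatalities(occupants, depth_m, building_type):
    """Expected fatalities in one building: count how many depth
    thresholds the water column exceeds (damage level 0..3), then
    apply the corresponding fatality rate to the occupants."""
    thresholds = DAMAGE_THRESHOLDS_M[building_type]
    level = sum(depth_m > t for t in thresholds)
    return occupants * FATALITY_RATE[level]
```

Summing this quantity over all inundated residential buildings (with occupants taken from census data, as in step one) gives the area-wide fatality estimate.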
Feig, Chiara; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia M A A; Simon, Judit; Mayer, Susanne
2018-04-01
Although Health Technology Assessment (HTA) is increasingly used to support evidence-based decision-making in health care, several barriers and facilitators for the use of HTA have been identified. This best-worst scaling (BWS) study aims to assess the relative importance of selected barriers and facilitators of the uptake of HTA studies in Austria. A BWS object case survey was conducted among 37 experts in Austria to assess the relative importance of HTA barriers and facilitators. Hierarchical Bayes estimation was applied, with the best-worst count analysis as sensitivity analysis. Subgroup analyses were also performed on professional role and HTA experience. The most important barriers were 'lack of transparency in the decision-making process', 'fragmentation', 'absence of appropriate incentives', 'no explicit framework for decision-making process', and 'insufficient legal support'. The most important facilitators were 'transparency in the decision-making process', 'availability of relevant HTA research for policy makers', 'availability of explicit framework for decision-making process', 'sufficient legal support', and 'appropriate incentives'. This study suggests that HTA barriers and facilitators related to the context of decision makers, especially 'policy characteristics' and 'organization and resources' are the most important in Austria. A transparent and participatory decision-making process could improve the adoption of HTA evidence.
Cholera epidemic among Rwandan refugees: experience of ICDDR,B in Goma, Zaire.
Siddique, A K
1994-01-01
In July 1994, one of the worst cholera epidemics broke out among the nearly one million Rwandan refugees in Goma, eastern Zaire. The United Nations High Commission for Refugees estimated that nearly 12,000 people died during the epidemic. The International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) sent an eight-member medical team to Goma headed by Dr AK Siddique, a senior scientist of the Centre and head of the Epidemic Control Preparedness Program, Dhaka, Bangladesh. During their two-week stay, the team, in collaboration with UNICEF and the Ministry of Health, Zaire, conducted an epidemiological assessment, operated a temporary treatment centre, and provided technical advice on case management of cholera and shigellosis to other health workers. The team also set up a microbiology laboratory in Goma to identify the pathogens responsible for the epidemic and their drug sensitivity patterns. The team visited a number of temporary treatment facilities in two of the five camp sites and provided technical advice to the health-care providers. They also visited treatment facilities in Goma city, where an estimated 200,000 refugees were affected by the epidemic. Deaths from cholera, even in the treatment centers, were much higher than expected: the overall case-fatality rate in the treatment centers was nearly 15%. Laboratory investigations showed that the initial epidemic was indeed caused by Vibrio cholerae strains resistant to tetracycline and doxycycline. By the first week of August, the number of cholera cases was declining, but the number of dysentery cases was increasing rapidly. Shigella dysenteriae type 1 was predominantly responsible; it was resistant to most drugs used for treating shigellosis, except mecillinam. Because of inappropriate rehydration therapy and the inadequate experience of health workers, many deaths were not prevented.
The team took over the operation of the temporary treatment center at Katindo in Goma city, which had one of the highest case-fatality rates (14.5%), and reduced the fatality rate to less than 1%.
Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors
NASA Astrophysics Data System (ADS)
Marti, Alejandro; Folch, Arnau
2018-03-01
Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. 
The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally efficient online dispersal models.
Huang, Susan S; Placzek, Hilary; Livingston, James; Ma, Allen; Onufrak, Fallon; Lankiewicz, Julie; Kleinman, Ken; Bratzler, Dale; Olsen, Margaret A; Lyles, Rosie; Khan, Yosef; Wright, Paula; Yokoe, Deborah S; Fraser, Victoria J; Weinstein, Robert A; Stevenson, Kurt; Hooper, David; Vostok, Johanna; Datta, Rupak; Nsa, Wato; Platt, Richard
2011-08-01
To evaluate whether longitudinal insurer claims data allow reliable identification of elevated hospital surgical site infection (SSI) rates, we conducted a retrospective cohort study of Medicare beneficiaries who underwent coronary artery bypass grafting (CABG) in US hospitals performing at least 80 procedures in 2005. Hospitals were assigned to deciles by using case mix-adjusted probabilities of having an SSI-related inpatient or outpatient claim code within 60 days of surgery. We then reviewed medical records of randomly selected patients to assess whether chart-confirmed SSI risk was higher in hospitals in the worst deciles compared with the best deciles. The cohort comprised fee-for-service Medicare beneficiaries who underwent CABG in these hospitals in 2005: 114,673 patients in 671 hospitals. In the best decile, 7.8% (958/12,307) of patients had an SSI-related code, compared with 24.8% (2,747/11,068) in the worst decile. Medical record review confirmed SSI in 40% (388/980) of those with SSI-related codes. In the best decile, the chart-confirmed annual SSI rate was 3.2%, compared with 9.4% in the worst decile, with an adjusted odds ratio of SSI of 2.7 (confidence interval, 2.2-3.3) for CABG performed in a worst-decile hospital compared with a best-decile hospital. Claims data can identify groups of hospitals with unusually high or low post-CABG SSI rates. Assessment of claims is more reproducible and efficient than current surveillance methods. This example of secondary use of routinely recorded electronic health information to assess quality of care can identify hospitals that may benefit from prevention programs.
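The gap between deciles can be illustrated with a crude odds ratio computed from the chart-confirmed rates quoted above (a back-of-the-envelope check, not the paper's case mix-adjusted estimate):

```python
def odds_ratio(p1, p0):
    """Crude odds ratio comparing two event proportions."""
    odds = lambda p: p / (1.0 - p)
    return odds(p1) / odds(p0)

# Chart-confirmed annual SSI rates from the abstract:
# 9.4% in the worst decile vs 3.2% in the best decile.
crude_or = odds_ratio(0.094, 0.032)  # ~3.1 before case-mix adjustment
```

The crude value sits slightly above the reported adjusted odds ratio of 2.7, as expected once case mix is accounted for.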
Development of an Infrared Lamp Array for the SMAP Spacecraft Thermal Balance Test
NASA Technical Reports Server (NTRS)
Miller, Jennifer R.; Emis, Nickolas; Forgette, Daniel
2015-01-01
NASA launched the SMAP observatory in January 2015 aboard a Delta II into a sun-synchronous orbit around Earth. The science payload of a radar and a radiometer utilizes a shared rotating six-meter antenna to provide a global map of the Earth's soil moisture content and its freeze/thaw state on a global, high-resolution scale in this three-year mission. An observatory-level thermal balance test conducted in May/June 2014 validated the thermal design and demonstrated launch readiness as part of the planned environmental test campaign. An infrared lamp array was designed and used in the thermal balance test to replicate solar heating on the solar array and sunlit side of the spacecraft that would normally be seen in orbit. The design, implementation, and operation of an infrared lamp array used for this nineteen-day system thermal test are described in this paper. Instrumental to the smooth operation of this lamp array was a characterization test performed in the same chamber two months prior to the observatory test to provide insight into its array operation and flux uniformity. This knowledge was used to identify the lamp array power settings that would provide the worst case predicted on-orbit fluxes during eclipse, cold, and hot cases. It also showed the lamp array variation when adjustments in flux were needed. Calorimeters calibrated prior to testing determined a relationship between calorimeter temperature and lamp array flux. This allowed the team to adjust the lamp output for the desired absorbed flux on the solar array. Flux levels were within 10% of the desired value at the center of the solar array with an ability to maintain these levels within 5% during steady state cases. All tests demonstrated the infrared lamp array functionality and furthered lamp array understanding for modeling purposes. This method contributed to a high-fidelity environmental simulation, which was required to replicate the extreme on-orbit thermal environments.
Mongoose: Creation of a Rad-Hard MIPS R3000
NASA Technical Reports Server (NTRS)
Lincoln, Dan; Smith, Brian
1993-01-01
This paper describes the development of a 32-bit, fully MIPS R3000 code-compatible Rad-Hard CPU, code-named Mongoose. Mongoose progressed from contract award, through the design cycle, to operational silicon in 12 months to meet a space mission for NASA. The goal was the creation of a fully static device capable of operation at the maximum Mil-883 derated speed under worst-case post-rad exposure, with full operational integrity. This included consideration of features for functional enhancements relating to mission compatibility and removal of commercial practices not supported by Rad-Hard technology. Mongoose evolved from LSI Logic's MIPS-I embedded processor, the LR33000, code-named Cobra, to its Rad-Hard 'equivalent'. The term 'equivalent' is used to indicate that the core of the processor is functionally identical, allowing the same use and optimizations of the MIPS-I instruction set software tool suite for compilation, software program trace, etc. This activity was started in September of 1991 under a contract from NASA-Goddard Space Flight Center (GSFC)-Flight Data Systems. The approach effected a teaming of NASA-GSFC for program development, LSI Logic for system and ASIC design coupled with the Rad-Hard process technology, and Harris (GASD) for Rad-Hard microprocessor design expertise. The program culminated with the generation of Rad-Hard Mongoose prototypes one year later.
In vivo RF powering for advanced biological research.
Zimmerman, Mark D; Chaimanonart, Nattapon; Young, Darrin J
2006-01-01
An optimized remote powering architecture with a miniature, implantable RF power converter for an untethered small laboratory animal inside a cage is proposed. The proposed implantable device exhibits dimensions less than 6 mm × 6 mm × 1 mm and a mass of 100 mg including a medical-grade silicone coating. The external system consists of a Class-E power amplifier driving a tuned 15 cm × 25 cm external coil placed underneath the cage. The implant device is located in the animal's abdomen in a plane parallel to the external coil and utilizes inductive coupling to receive power from the external system. A half-wave rectifier rectifies the received AC voltage and passes the resulting DC current to a 2.5 kΩ resistor, which represents the loading of an implantable microsystem. An optimal operating point with respect to operating frequency and number of turns in each coil inductor was determined by analyzing the system efficiency. The determined optimal operating condition is based on a 4-turn external coil and a 20-turn internal coil operating at 4 MHz. With the Class-E amplifier consuming a constant power of 25 W, this operating condition is sufficient to supply the desired 3.2 V at 1.3 mA to the load over a cage size of 10 cm × 20 cm with an animal tilting angle of up to 60 degrees, which is the worst case considered for the prototype design. A voltage regulator can be designed to regulate the received DC power into a stable supply for the bio-implant microsystem.
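The load and efficiency figures quoted above are mutually consistent, as a quick calculation using only numbers stated in the abstract shows:

```python
def link_figures(v_load, i_load, p_tx):
    """Derive DC load resistance, delivered power, and end-to-end
    efficiency from measured load voltage/current and transmit power."""
    r_load = v_load / i_load      # effective DC load resistance (ohms)
    p_load = v_load * i_load      # DC power delivered to the implant (W)
    return r_load, p_load, p_load / p_tx

# From the abstract: 3.2 V at 1.3 mA, Class-E amplifier drawing 25 W.
r, p, eff = link_figures(3.2, 1.3e-3, 25.0)
# r is ~2.46 kOhm, close to the stated 2.5 kOhm load; the end-to-end
# efficiency is well under 0.1%, typical of loosely coupled,
# cage-scale inductive links.
```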
Equilibrium temperature in a clump of bacteria heated in fluid.
Davey, K R
1990-01-01
A theoretical model was developed and used to quantitatively estimate the "worst-case" (i.e., longest) time to reach equilibrium temperature at the center of a clump of bacteria heated in fluid. For clumps of 10 to 10⁶ cells heated in vapors, such as dry and moist air, and in liquids such as purees and juices, predictions show that temperature equilibrium will occur at sterilization temperatures up to 130 degrees C in under 0.02 s. Model development highlighted that the controlling influence on clump heat-up time is the surface convection thermal resistance, and that the internal conduction resistance of the clump mass is negligible by comparison. The time for a clump to reach equilibrium sterilization temperature therefore decreases with the relative turbulence (velocity) of the heating fluid, as occurs in many process operations. These results confirm widely held suppositions that the heat-up time of bacteria in vapor or liquid is not significant relative to usual sterilization times. PMID:2306095
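The finding that surface convection controls the heat-up time corresponds to a small Biot number, for which a lumped-capacitance model applies. The sketch below is not the paper's model; the radius, convection coefficient, and water-like properties are assumed values chosen only to reproduce the sub-0.02 s order of magnitude:

```python
import math

def equilibrium_time(radius_m, h, rho=1000.0, cp=4180.0, fraction=0.99):
    """Lumped-capacitance estimate of the time for a spherical clump to
    close the given fraction of the gap to the fluid temperature. Valid
    when the Biot number h*r/k << 1, i.e. surface convection rather than
    internal conduction controls heat-up."""
    tau = rho * cp * radius_m / (3.0 * h)   # time constant; V/A = r/3 for a sphere
    return -tau * math.log(1.0 - fraction)  # theta(t)/theta(0) = exp(-t/tau)

# Illustrative (assumed) values: a 10-micron-radius clump with water-like
# properties in a well-stirred liquid, h = 10 kW m^-2 K^-1.
t = equilibrium_time(10e-6, 10_000)
```

A larger convection coefficient (more turbulence) shortens the time, matching the trend described in the abstract.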
Li-ion cells for terrestrial robots
NASA Technical Reports Server (NTRS)
Chin, Keith B.; Smart, M. C.; Narayanan, S. R.; Ratnakumar, B. V.; Whitcanack, L. D.; Davies, E. D.; Surampudi, S.; Raman, N. S.
2003-01-01
SAFT prismatic-wound 5 Ahr MP-series cells were evaluated for potential application in a lithium-ion battery designed for Tactical Mobile Robots (TMR). In order to satisfy battery design requirements, a 10 Ahr battery containing two parallel 8-cell strings was proposed. The proposed battery has a weight and volume of approximately 3.2 kg and 1.6 liters, respectively. Cell qualification procedures included initial characterization, followed by charge/discharge cycling at 100% DOD with intermittent EIS measurements at various states of charge. Certain cells were also subjected to extreme operational temperatures for worst-case analysis. Excellent specific energy (>130 Whr/kg) was obtained with initial characterization cycles. Even under abusive thermal conditions, the cell capacity fade was less than Ahr after 300 cycles. Rate characterization showed good cell discharge behavior with minimal decrease in capacity. At various states of charge, impedance measurements suggest that the cathode plays a more significant role in capacity fade than the anode.
Characterizing Space Environments with Long-Term Space Plasma Archive Resources
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Miller, J. Scott; Diekmann, Anne M.; Parker, Linda N.
2009-01-01
A significant scientific benefit of establishing and maintaining long-term space plasma data archives is the ready access the archives afford to resources required for characterizing spacecraft design environments. Space systems must be capable of operating in the mean environments driven by climatology as well as the extremes that occur during individual space weather events. Long-term time series are necessary to obtain quantitative information on the environment variability and extremes that characterize the mean and worst-case environments that may be encountered during a mission. In addition, analysis of large data sets is important to scientific studies of flux-limiting processes that provide a basis for establishing upper limits to environment specifications used in radiation or charging analyses. We present applications using data from existing archives and highlight their contributions to space environment models developed at Marshall Space Flight Center, including the Chandra Radiation Model, ionospheric plasma variability models, and plasma models of the L2 space environment.
Cooper, Rachel
2014-02-01
In the 1940s and 1950s thousands of lobotomies were performed on people with mental disorders. These operations were known to be dangerous, but thought to offer great hope. Nowadays, the lobotomies of the 1940s and 1950s are widely condemned. The consensus is that the practitioners who employed them were, at best, misguided enthusiasts, or, at worst, evil. In this paper I employ standard decision theory to understand and assess shifts in the evaluation of lobotomy. Textbooks of medical decision making generally recommend that decisions under risk be made so as to maximise expected utility (MEU). I show that using this procedure suggests that the 1940s and 1950s practice of psychosurgery was justifiable. In making sense of this finding we have a choice: either we can accept that psychosurgery was justified, in which case condemnation of the lobotomists is misplaced; or we can conclude that the use of formal decision procedures, such as MEU, is problematic.
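The MEU rule the paper applies can be stated compactly: choose the action with the highest probability-weighted utility. The probabilities and utilities below are purely hypothetical, chosen only to show how optimistic mid-century estimates could make surgery the MEU-preferred option:

```python
def expected_utility(outcomes):
    """Expected utility of an action given (probability, utility) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers for illustration only (not from the paper):
operate = [(0.3, 0.9), (0.5, 0.4), (0.2, 0.0)]  # improvement / partial / severe harm
no_op   = [(1.0, 0.2)]                          # certain continued institutionalisation
choice = "operate" if expected_utility(operate) > expected_utility(no_op) else "no_op"
```

Under these assumed inputs MEU favours operating; the paper's point is that the verdict hinges on the inputs, not the decision rule.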
Assessment of microwave-based clinical waste decontamination unit.
Hoffman, P N; Hanley, M J
1994-12-01
A clinical waste decontamination unit that used microwave-generated heat was assessed for operator safety and efficacy. Tests with loads artificially contaminated with aerosol-forming particles showed that no particles were detected outside the machine provided the seals and covers were correctly seated. Thermometric measurement of a self-generated steam decontamination cycle was used to determine the parameters needed to ensure heat disinfection of the waste reception hopper prior to entry for maintenance or repair. Bacterial and thermometric test pieces were passed through the machine within a full load of clinical waste. These test pieces, designed to represent a worst-case situation, were enclosed in aluminium foil to shield them from direct microwave energy. None of the 100 bacterial test pieces yielded growth on culture, and all 100 thermal test pieces achieved temperatures in excess of 99 degrees C during their passage through the decontamination unit. It was concluded that this method may be used to render safe the bulk of ward-generated clinical waste.
NASA Technical Reports Server (NTRS)
Campbell, Colin; Cox, Marlon; Meginnis, Carly; Falconi, Eric
2017-01-01
The Variable Oxygen Regulator (VOR), a stepper-actuated two-stage mechanical regulator, is being developed to serve as the Primary Oxygen Regulator (POR) and Secondary Oxygen Regulator (SOR) within the Advanced EMU PLSS, now referred to as the xEMU and xPLSS. Three prototype designs have been fabricated and tested as part of this development. Building upon the lessons learned from 35 years of Shuttle/ISS EMU Program operation, including the fleet-wide EMU Secondary Oxygen Pack (SOP) contamination failure that occurred in 2000, the VOR is being analyzed, designed, and tested for oxygen compatibility with controlled Non-Volatile Residue (NVR) and a representative worst-case hydrocarbon system contamination event (>100 mg/sq ft dodecane). This paper discusses the steps taken in testing VOR 2.0 for oxygen compatibility and then discusses follow-on design changes implemented in VOR 3.0 (the third prototype) as a result.
A design procedure for the phase-controlled parallel-loaded resonant inverter
NASA Technical Reports Server (NTRS)
King, Roger J.
1989-01-01
High-frequency-link power conversion and distribution based on a resonant inverter (RI) has recently been proposed. The design of several topologies is reviewed, and a simple approximate design procedure is developed for the phase-controlled parallel-loaded RI. This design procedure seeks to ensure the benefits of resonant conversion and is verified by data from a laboratory 2.5-kVA, 20-kHz converter. A simple phasor analysis is introduced as a useful approximation for design purposes. The load is considered to be a linear impedance (or an AC current sink). Also obtained are predictable worst-case ratings for each component of the resonant tank circuit and the inverter switches. For a given load VA requirement, below-resonance operation is found to result in a significantly lower tank VA requirement. Under transient conditions such as load short-circuit, a reversal of the expected commutation sequence is possible.
Case series: Two cases of eyeball tattoos with short-term complications.
Duarte, Gonzalo; Cheja, Rashel; Pachón, Diana; Ramírez, Carolina; Arellanes, Lourdes
2017-04-01
To report two cases of eyeball tattoos with short-term post-procedural complications. Case 1 is a 26-year-old Mexican man who developed orbital cellulitis and posterior scleritis 2 h after an eyeball tattoo; the patient responded satisfactorily to systemic antibiotic and corticosteroid treatment. Case 2 is a 17-year-old Mexican man who developed two sub-episcleral nodules at the ink injection sites immediately after the procedure. Eyeball tattoos are performed by personnel without ophthalmic training, and a substantial number of short-term risks are associated with the procedure. Long-term effects on the eyes and vision are still unknown, but in a worst-case scenario could include loss of vision or permanent damage to the eyes.
Keyboard before Head Tracking Depresses User Success in Remote Camera Control
NASA Astrophysics Data System (ADS)
Zhu, Dingyun; Gedeon, Tom; Taylor, Ken
In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving the camera either under automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting experimentally using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either via a keyboard or via head tracking, using two different sets of head gestures called "head motion" and "head flicking" for turning camera motion on/off. Our results show that head motion control provided performance comparable to the keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that using the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst-performing method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.
Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2011-11-01
We describe an empirical model of exposure to respirable crystalline silica (RCS) used to create a quantitative job-exposure matrix (JEM) for community-based studies. Personal measurements of exposure to RCS from Europe and Canada were obtained for exposure modelling. A mixed-effects model was elaborated, with region/country and job titles as random-effect terms. The fixed-effect terms included year of measurement, measurement strategy (representative or worst-case), sampling duration (minutes), and an a priori exposure intensity rating for each job from an independently developed JEM (none, low, high). In total, 23,640 personal RCS exposure measurements, covering the period from 1976 to 2009, were available for modelling. The model indicated an overall downward time trend in RCS exposure levels of -6% per year. Exposure levels were higher in the UK and Canada, and lower in Northern Europe and Germany. Worst-case sampling was associated with higher reported exposure levels, and an increase in sampling duration was associated with lower reported exposure levels. The highest predicted RCS exposure levels in the reference year (1998) were for chimney bricklayers (geometric mean 0.11 mg m⁻³) and for monument carvers and other stone cutters and carvers (0.10 mg m⁻³). The resulting model enables us to predict time-, job-, and region/country-specific exposure levels of RCS. These predictions will be used in SYNERGY, an ongoing pooled multinational community-based case-control study on lung cancer.
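The fitted overall trend of about -6% per year can be applied to a reference-year geometric mean to extrapolate a job-specific level to another year. This is a minimal sketch of that extrapolation only; the paper's full model also carries region/country and job random effects and the other fixed-effect covariates:

```python
def predict_rcs(gm_ref, year, ref_year=1998, annual_change=-0.06):
    """Apply a constant multiplicative annual trend to a reference-year
    geometric-mean exposure level (mg/m^3)."""
    return gm_ref * (1.0 + annual_change) ** (year - ref_year)

# Chimney bricklayers: geometric mean 0.11 mg/m^3 in 1998 (from the abstract).
level_2009 = predict_rcs(0.11, 2009)  # roughly halved after 11 years of -6%/y
```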
Johnson, Miriam J; Kanaan, Mona; Richardson, Gerry; Nabb, Samantha; Torgerson, David; English, Anne; Barton, Rachael; Booth, Sara
2015-09-07
About 90% of patients with intra-thoracic malignancy experience breathlessness. Breathing training is helpful, but it is unknown whether repeated sessions are needed. The present study aims to test whether three sessions are better than one for breathlessness in this population. This is a multi-centre, randomised, controlled, non-blinded, parallel-arm trial. Participants were allocated to three sessions or a single session (1:2 ratio) using central computer-generated block randomisation by an independent trials unit, stratified by centre. The setting was respiratory, oncology or palliative care clinics at eight UK centres. Inclusion criteria were people with intrathoracic cancer and refractory breathlessness, expected prognosis ≥3 months, and no prior experience of breathing training. The trial intervention was a complex breathlessness intervention (breathing training, anxiety management, relaxation, pacing, and prioritisation) delivered over three hour-long sessions at weekly intervals, or during a single hour-long session. The main primary outcome was worst breathlessness over the previous 24 hours ('worst'), by numerical rating scale (0 = none; 10 = worst imaginable). Our primary analysis was area under the curve (AUC) of 'worst' from baseline to 4 weeks. All analyses were by intention to treat. Between April 2011 and October 2013, 156 consenting participants were randomised (52 three-session; 104 single-session). Overall, the 'worst' score reduced from 6.81 (SD, 1.89) to 5.84 (2.39). The primary analysis [n = 124 (79%)] showed no between-arm difference in the AUC: three sessions 22.86 (7.12) vs single session 22.58 (7.10); P = 0.83; mean difference 0.2, 95% CI -2.31 to 2.97. Complete-case analysis showed a non-significant reduction in QALYs with three sessions (mean difference -0.006, 95% CI -0.018 to 0.006). Sensitivity analyses found similar results. The probability of the single session being cost-effective (threshold value of £20,000 per QALY) was over 80%.
There was no evidence that three sessions conferred additional benefits, including cost-effectiveness, over one. A single session of breathing training seems appropriate and minimises patient burden. Registry: ISRCTN; ISRCTN49387307; http://www.isrctn.com/ISRCTN49387307 ; registration date: 25/01/2011.
Optimizing Processes to Minimize Risk
NASA Technical Reports Server (NTRS)
Loyd, David
2017-01-01
NASA, like other hazardous industries, has suffered catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you cannot eliminate risk, focus on mitigating the worst consequences and on recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.
Distributed control at Love Canal
DOE Office of Scientific and Technical Information (OSTI.GOV)
McPherson, G.; Rider, G.J.; Sadowski, B.
Love Canal is known worldwide as the site of one of the worst non-nuclear environmental disasters in modern history. For 12 years, a Niagara Falls, New York chemical company used the canal bed as a chemical dump. This article discusses the computerized control of equipment used to remove the toxic materials from the ground under Love Canal, and how the minimization of maintenance is reducing maintenance costs and increasing operator safety.
The Worst Disaster: The Decisive Point and the Fall of Singapore
2007-11-06
The British territory of Malaya included Singapore. In 1957, Malaya became an independent state. In 1965, Singapore seceded from Malaysia. A.J. Kennedy, A...repulsed? Per the leisurely pace of Singapore's defense planning to date, it would certainly have been uncharacteristic of the entire Singapore... leisurely pace, Britain's pre-WWII operational commanders were unable to compensate for a newly identified decisive point. Conversely, today's
Katz, Alison Rosamund
2013-01-01
The promotion of noncommunicable diseases (NCDs) as a global health priority started a decade ago and culminated in a 2011 United Nations high-level meeting. The focus is on four diseases (cardiovascular and chronic respiratory diseases, cancers, and diabetes) and four risk factors (tobacco use, unhealthy diet, physical inactivity, and harmful alcohol use). The message is that disease and death are now globalized, risk factors are overwhelmingly behavioral, and premature NCD deaths, especially in low- and middle-income countries, are the concern. The NCD agenda is promoted by United Nations agencies, foundations, institutes, and organizations in a style that suggests a market opportunity. This "hard sell" of NCDs contrasts with the sober style of the World Health Organization's Global Burden of Disease report, which presents a more nuanced picture of mortality and morbidity and different implications for global health priorities. This report indicates continuing high levels of premature death from infectious disease and from maternal, perinatal, and nutritional conditions in low-income countries and large health inequalities. Comparison of the reports offers an illustration of the World Health Organization at its worst, operating under the influence of the private sector, and at its best, operating according to its constitutional mandate.
USDA-ARS?s Scientific Manuscript database
Past climate observations indicate a rapid increase in global atmospheric CO2 concentration during the late 20th century (13 ppm/decade), and models project a further rise throughout the 21st century (24 ppm/decade and 69 ppm/decade in the best- and worst-case scenarios, respectively). We modified SWA...
2008-12-01
full glare of media and public scrutiny, they are expected to perform flawlessly like a goalie in hockey or soccer, or a conversion kicker in...among all levels of government, not a plan that is pulled off the shelf only during worst-case disasters. The lifecycle of disasters entails a
Sports Competition--Integrated or Segregated? Which Is Better for Your Child?
ERIC Educational Resources Information Center
Grosse, Susan J.
2008-01-01
Selecting competitive sports opportunities for a child is a challenging process. Parents have to make the right choices so that their young athletes will have many years of healthy, happy, active experiences. If parents make the wrong choices, their son or daughter will have, at the very least, a few unhappy hours and, in the worst case, could…
Responding to Disaster with a Service Learning Project for Honors Students
ERIC Educational Resources Information Center
Yoder, Stephen A.
2013-01-01
On Thursday, April 27, 2011, one of the worst natural disasters in the history of Alabama struck in the form of ferocious tornadoes touching down in various parts of the state. The dollar amount of the property damage was in the billions. Lives were lost and thousands of survivors' lives were seriously and in many cases forever disrupted. A few…
Modelling the Growth of Swine Flu
ERIC Educational Resources Information Center
Thomson, Ian
2010-01-01
The spread of swine flu has been a cause of great concern globally. With no vaccine yet developed (at the time of writing, July 2009), and given that modern-day humans can travel speedily across the world, there are fears that this disease may spread out of control. The worst-case scenario would be one of unfettered exponential growth.…
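The "unfettered exponential growth" worst case described above can be sketched directly. The growth rate and initial case count below are illustrative assumptions, not figures from the article:

```python
import math

# Hedged sketch of unchecked exponential spread: N(t) = N0 * e^(r*t).
# r = 0.2/day and N0 = 10 cases are assumed values for illustration only.
def cases_after(days, initial=10, r=0.2):
    """Cases after `days` days of pure exponential growth at rate r per day."""
    return initial * math.exp(r * days)

# Under exponential growth, cases double every t_d = ln(2)/r days.
doubling_time = math.log(2) / 0.2
print(round(cases_after(30)), round(doubling_time, 1))  # 4034 3.5
```

The doubling-time formula is the usual way to communicate such a model: at r = 0.2 per day, case counts double roughly every three and a half days until some limiting factor intervenes.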
Between the Under-Labourer and the Master-Builder: Observations on Bunge's Method
ERIC Educational Resources Information Center
Agassi, Joseph
2012-01-01
Mario Bunge has repeatedly discussed contributions to philosophy and to science that are worthless at best and dangerous at worst, especially cases of pseudo-science. He clearly gives his reason in his latest essay on this matter: "The fact that science can be faked to the point of deceiving science lovers suggests the need for a rigorous sifting…
Technology, design, simulation, and evaluation for SEP-hardened circuits
NASA Technical Reports Server (NTRS)
Adams, J. R.; Allred, D.; Barry, M.; Rudeck, P.; Woodruff, R.; Hoekstra, J.; Gardner, H.
1991-01-01
This paper describes the technology, design, simulation, and evaluation for improvement of the Single Event Phenomena (SEP) hardness of gate-array and SRAM cells. Through the use of design and processing techniques, it is possible to achieve an SEP error rate less than 1.0 x 10(exp -10) errors/bit-day for a 90 percent worst-case geosynchronous orbit environment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... rate, type of control devices, process parameters (e.g., maximum heat input), and non-process... control systems (if applicable) and explain why the conditions are worst-case. (c) Number of test runs... located at the outlet of the control device and prior to any releases to the atmosphere. (e) Collection of...
Code of Federal Regulations, 2011 CFR
2011-07-01
... rate, type of control devices, process parameters (e.g., maximum heat input), and non-process... control systems (if applicable) and explain why the conditions are worst-case. (c) Number of test runs... located at the outlet of the control device and prior to any releases to the atmosphere. (e) Collection of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... rate, type of control devices, process parameters (e.g., maximum heat input), and non-process... control systems (if applicable) and explain why the conditions are worst-case. (c) Number of test runs... located at the outlet of the control device and prior to any releases to the atmosphere. (e) Collection of...
The conservation of diversity in forest trees
F. Thomas Ledig
1988-01-01
Deforestation, pollution, and climatic change threaten forest diversity all over the world. And because forests are the habitats for diverse organisms, the threat is extended to all the flora and fauna associated with forests, not only forest trees. In a worst-case scenario, if the tropical forest in Latin America were reduced to the areas now set aside in parks and...
Management adaptation to fires in the wildland-urban risk areas in Spain
Gema Herrero-Corral
2013-01-01
Forest fires not only cause damage to ecosystems but also result in major socio-economic losses and, in the worst cases, the loss of human life. Specifically, the incidence of fires in the overlapping areas between building structures and forest vegetation (wildland-urban interface, WUI) generates highly complex emergencies due to the presence of people and goods....
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 1.7 to 2.6 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 7.2 to 24.2 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
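The Froehlich abutment-scour relation criticized as conservative in these reports (Richardson and others, 1995) has, in its HEC-18 live-bed form, roughly the shape below; the coefficients should be checked against the cited source before use:

```latex
\frac{y_s}{y_a} = 2.27\,K_1 K_2 \left(\frac{L'}{y_a}\right)^{0.43} \mathrm{Fr}^{0.61} + 1
```

where $y_s$ is the abutment scour depth, $y_a$ the average depth of flow on the floodplain, $K_1$ and $K_2$ coefficients for abutment shape and embankment skew angle, $L'$ the length of embankment projected normal to the flow, and $\mathrm{Fr}$ the Froude number of the approach flow. The additive $+1$ term is a safety factor, which is one reason the equation tends toward "excessively conservative estimates of scour depths."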
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.8 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.1 to 11.6 ft. The worst-case abutment scour occurred at the incipient-overtopping discharge, which was 50 cfs lower than the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Dynamic safety assessment of natural gas stations using Bayesian network.
Zarei, Esmaeil; Azadeh, Ali; Khakzad, Nima; Aliabadi, Mostafa Mirzaei; Mohammadfam, Iraj
2017-01-05
Pipelines are one of the most popular and effective ways of transporting hazardous materials, especially natural gas. However, the rapid development of gas pipelines and stations in urban areas has introduced a serious threat to public safety and assets. Although different methods have been developed for risk analysis of gas transportation systems, a comprehensive methodology for risk analysis is still lacking, especially for natural gas stations. The present work is aimed at developing a dynamic and comprehensive quantitative risk analysis (DCQRA) approach for accident scenario and risk modeling of natural gas stations. In this approach, an FMEA is used for hazard analysis, while a Bow-tie diagram and a Bayesian network are employed to model the worst-case accident scenario and to assess the risks. The results indicated that failure of the regulator system was the worst-case accident scenario, with human error as the largest contributing factor. Thus, in the risk management plan of natural gas stations, priority should be given to the most probable root events and main contributing factors identified in the present study, in order to reduce the occurrence probability of the accident scenarios and thus alleviate the risks. Copyright © 2016 Elsevier B.V. All rights reserved.
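The core Bayesian-network step the abstract describes, updating belief in a root cause once the top event is observed, can be sketched with a toy two-cause model. The root events and all probabilities below are illustrative assumptions, not the paper's data:

```python
# Hedged sketch of a bow-tie-style Bayesian update with two independent
# root causes. Priors are invented for illustration; in the paper these
# would come from FMEA and failure-rate data.
def or_gate(p_a, p_b):
    """P(top event) when either of two independent root events suffices."""
    return 1 - (1 - p_a) * (1 - p_b)

p_human_error = 0.05      # assumed prior probability of human error
p_regulator_fault = 0.02  # assumed prior probability of a regulator fault

p_top = or_gate(p_human_error, p_regulator_fault)

# Bayes' rule: P(human error | top event). In this toy model human error
# alone always causes the top event, so P(top | human error) = 1.
posterior = p_human_error / p_top
print(round(p_top, 4), round(posterior, 3))  # 0.069 0.725
```

Even with a small prior, a sufficient cause like human error dominates the posterior once the accident is observed, which mirrors the paper's finding that human error is the factor to prioritize.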
Olson, Scott A.; Degnan, James R.
1997-01-01
Contraction scour computed for all modelled flows was 0.0 ft. Computed left abutment scour ranged from 9.4 to 10.2 ft. with the worst-case scour occurring at the 500-year discharge. Computed right abutment scour ranged from 2.7 to 5.7 ft. with the worst-case scour occurring at the incipient roadway-overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Medalie, Laura
1996-01-01
Contraction scour for all modelled flows ranged from 0.3 to 0.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 8.0 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Boehmler, Erick M.; Song, Donald L.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 1.4 feet. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 2.3 to 8.9 feet. The worst-case abutment scour occurred at the 100-year discharge at the right abutment. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Flynn, Robert H.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 0.7 to 1.3 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 9.1 to 12.5 ft. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.