Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien
2016-06-01
A new framework for the design of parallel transmit (pTx) pulses is presented, introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that, using the proposed framework, pulses can be designed with controlled worst-case SAR in the presence of errors of this magnitude at minor cost to the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
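The iterative procedure described in this abstract can be sketched as a simple loop. The SAR model, error magnitudes, and convergence rule below are toy stand-ins: the real design step solves a constrained pulse-optimization problem, and the worst-case bound comes from an electromagnetic model, not the scalar inflation factor assumed here.

```python
import numpy as np

def design_pulse(sar_constraint):
    # Hypothetical stand-in for the SAR-constrained pulse design step:
    # here the designed pulse's nominal SAR simply equals the constraint.
    return sar_constraint

def worst_case_sar(nominal_sar, amp_err=0.08, phase_err_deg=3.0):
    # Toy bound (assumption): an amplitude error of fraction e scales the
    # E-field by (1+e), hence SAR (quadratic in E) by (1+e)^2; the phase
    # error adds a crude extra inflation factor.
    phase_factor = 1.0 + np.sin(np.radians(phase_err_deg))
    return nominal_sar * (1.0 + amp_err) ** 2 * phase_factor

def iterate_design(safety_limit, tol=1e-6, max_iter=50):
    constraint = safety_limit
    for _ in range(max_iter):
        nominal = design_pulse(constraint)
        wc = worst_case_sar(nominal)
        if wc <= safety_limit * (1.0 + tol):
            return constraint, wc
        # Tighten the design constraint by the overshoot ratio and redo.
        constraint *= safety_limit / wc
    return constraint, wc

limit = 10.0  # W/kg, an illustrative local-SAR limit
c, wc = iterate_design(limit)
```

With the toy model the loop converges in two iterations: the constraint is tightened until the inflated worst-case SAR sits exactly at the safety limit.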
An SEU resistant 256K SOI SRAM
NASA Astrophysics Data System (ADS)
Hite, L. R.; Lu, H.; Houston, T. W.; Hurta, D. S.; Bailey, W. E.
1992-12-01
A novel SEU (single event upset) resistant SRAM (static random access memory) cell has been implemented in a 256K SOI (silicon on insulator) SRAM that has attractive performance characteristics over the military temperature range of -55 to +125 C. These include worst-case access time of 40 ns with an active power of only 150 mW at 25 MHz, and a worst-case minimum WRITE pulse width of 20 ns. Measured SEU performance gives an Adams 10 percent worst-case error rate of 3.4 x 10(exp -11) errors/bit-day using the CRUP code with a conservative first-upset LET threshold. Modeling does show that higher bipolar gain than that measured on a sample from the SRAM lot would produce a lower error rate. Measurements show the worst-case supply voltage for SEU to be 5.5 V. Analysis has shown this to be primarily caused by the drain voltage dependence of the beta of the SOI parasitic bipolar transistor. Based on this, SEU experiments with SOI devices should include measurements as a function of supply voltage, rather than the traditional 4.5 V, to determine the worst-case condition.
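For scale, the quoted per-bit rate can be converted to a per-device rate for the 256K (262,144-bit) SRAM. The arithmetic below is purely illustrative (a 365.25-day year is assumed):

```python
# Per-bit upset rate quoted in the abstract (Adams 10% worst case).
per_bit_per_day = 3.4e-11          # errors/bit-day
bits = 256 * 1024                  # 256K SRAM = 262,144 bits

per_device_per_day = per_bit_per_day * bits
per_device_per_year = per_device_per_day * 365.25
```

At this rate the whole device would see on the order of a few upsets per thousand device-years in that environment.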
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for problems of spacecraft trajectory estimation in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the assumed model, the filters can perform very poorly. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. The worst errors in the target plane also provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
Robust Flutter Margin Analysis that Incorporates Flight Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Martin J.
1998-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
NASA Technical Reports Server (NTRS)
Simon, M. K.; Polydoros, A.
1981-01-01
This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR exhibit a linear dependence between the jammer's optimal power allocation and the system's error-probability performance.
Correct consideration of the index of refraction using blackbody radiation.
Hartmann, Jurgen
2006-09-04
The correct consideration of the index of refraction when using blackbody radiators as standard sources for optical radiation is derived and discussed. It is shown that simply using the index of refraction of air at laboratory conditions is not sufficient. A combination of the index of refraction of the medium inside the blackbody radiator and of that along the optical path between blackbody and detector has to be used instead. A worst-case approximation of the error introduced by neglecting these effects is presented, showing that the error is below 0.1% for wavelengths above 200 nm. Nevertheless, for the determination of spectral radiance for the purpose of radiation temperature measurements, the correct consideration of the refractive index is mandatory. The worst-case estimate reveals that the introduced temperature error at a blackbody temperature of 3000 degrees C can be as high as 400 mK at a wavelength of 650 nm, and even higher at longer wavelengths.
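The quoted temperature error can be approximated from the Wien-law sensitivity of spectral radiance to temperature. The sketch below is an assumption-laden back-of-envelope check, not the paper's derivation: it feeds the abstract's 0.1% worst-case radiance error through the Wien sensitivity and lands at the same order of magnitude as the quoted 400 mK.

```python
C2 = 1.4388e-2            # second radiation constant, m*K
wavelength = 650e-9       # m
T = 3000.0 + 273.15       # blackbody at 3000 degrees C, in K
rel_radiance_err = 1e-3   # 0.1% worst-case spectral-radiance error (abstract)

# In the Wien approximation L ~ exp(-C2 / (wavelength * T)), so
# dL/L = C2 / (wavelength * T**2) * dT. Inverting for dT:
dT = rel_radiance_err * wavelength * T**2 / C2   # kelvin
```

This gives roughly 0.5 K, consistent in order with the stated 400 mK worst case; the exact figure depends on the refractive indices actually assumed.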
Worst case estimation of homology design by convex analysis
NASA Technical Reports Server (NTRS)
Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.
1998-01-01
The methodology of homology design is investigated for optimum design of advanced structures, for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
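When the error index is linearized in the uncertainty variables, the Lagrange-multiplier search over the hyperellipse boundary has a closed form, which the sketch below demonstrates. The weight matrix W and sensitivity vector s are made-up illustrative values, not taken from the paper's truss example.

```python
import numpy as np

# Worst case of a linearized error index g(x) = s . x over the convex hull
# x^T W x <= 1 (a hyperellipse of nodal-force fluctuations). The Lagrange
# stationarity condition W x = lambda * s places the maximizer on the
# boundary at x* = W^{-1} s / sqrt(s^T W^{-1} s).
W = np.diag([4.0, 1.0, 0.25])     # shapes the ellipse semi-axes (assumed)
s = np.array([1.0, 2.0, 0.5])     # sensitivity of the error index (assumed)

Winv_s = np.linalg.solve(W, s)
g_max = np.sqrt(s @ Winv_s)       # worst-case error index
x_star = Winv_s / g_max           # worst loading, on the ellipse boundary
```

The maximizer satisfies x*^T W x* = 1 exactly, confirming it sits on the boundary of the convex hull, as the Lagrange argument in the abstract requires.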
Technology, design, simulation, and evaluation for SEP-hardened circuits
NASA Technical Reports Server (NTRS)
Adams, J. R.; Allred, D.; Barry, M.; Rudeck, P.; Woodruff, R.; Hoekstra, J.; Gardner, H.
1991-01-01
This paper describes the technology, design, simulation, and evaluation for improvement of the Single Event Phenomena (SEP) hardness of gate-array and SRAM cells. Through the use of design and processing techniques, it is possible to achieve an SEP error rate less than 1.0 x 10(exp -10) errors/bit-day for a 90 percent worst-case geosynchronous orbit environment.
Improved astigmatic focus error detection method
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.
1992-01-01
All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.
2007-01-01
Initial total ionizing dose (TID) and single event heavy ion test results are presented for an unhardened commercial flash memory, fabricated with 63 nm technology. The parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10(exp -12) errors/bit-day for the Adams Ten Percent Worst Case Environment.
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.
Ferrie, Christopher; Blume-Kohout, Robin
2016-03-04
A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/sqrt[N])-in contrast to that of classical probability estimation, which is O(1/N)-where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
The contribution of low-energy protons to the total on-orbit SEU rate
Dodds, Nathaniel Anson; Martinez, Marino J.; Dodd, Paul E.; ...
2015-11-10
Low- and high-energy proton experimental data and error rate predictions are presented for many bulk Si and SOI circuits from the 20-90 nm technology nodes to quantify how much low-energy protons (LEPs) can contribute to the total on-orbit single-event upset (SEU) rate. Every effort was made to predict LEP error rates that are conservatively high; even secondary protons generated in the spacecraft shielding have been included in the analysis. Across all the environments and circuits investigated, and when operating within 10% of the nominal operating voltage, LEPs were found to increase the total SEU rate to up to 4.3 times as high as it would have been in the absence of LEPs. Therefore, the best approach to account for LEP effects may be to calculate the total error rate from high-energy protons and heavy ions, and then multiply it by a safety margin of 5. If that error rate can be tolerated, then our findings suggest that it is justified to waive LEP tests in certain situations. Trends were observed in the LEP angular responses of the circuits tested. As a result, grazing angles were the worst case for the SOI circuits, whereas the worst-case angle was at or near normal incidence for the bulk circuits.
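The bounding recipe suggested in this abstract amounts to simple arithmetic; the component rates below are hypothetical, chosen only to show the shape of the calculation.

```python
# Bounding recipe from the study: compute the error rate from high-energy
# protons and heavy ions, then apply a 5x margin to cover low-energy
# protons (which raised total rates by up to 4.3x in the worst case).
hep_rate = 2.0e-10    # errors/bit-day from high-energy protons (hypothetical)
ion_rate = 1.5e-10    # errors/bit-day from heavy ions (hypothetical)

bounded_total = 5.0 * (hep_rate + ion_rate)   # conservative LEP-inclusive bound
```

If the bounded total is still tolerable for the mission, the study's finding is that dedicated LEP testing may be waived in certain situations.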
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
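The overestimation mechanism in this abstract can be illustrated by computing the chance that a multilocus genotype carries at least one error, which is what creates spurious "ghost" individuals. The per-allele error rate and locus count below are illustrative assumptions, not the Yellowstone estimates.

```python
# Probability that a multilocus genotype contains at least one genotyping
# error, given a per-allele error rate e and L diploid loci (2 alleles each).
e = 0.01    # per-allele error rate remaining after multiple PCR (assumed)
L = 10      # number of microsatellite loci scored (assumed)

p_clean = (1.0 - e) ** (2 * L)   # all 2L allele calls correct
p_ghost = 1.0 - p_clean          # sample yields a spurious genotype
```

Even a 1% per-allele error rate leaves nearly a fifth of samples with an erroneous multilocus genotype, which is how counts of distinct genotypes can inflate the population estimate severalfold.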
Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops
NASA Astrophysics Data System (ADS)
Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.
2017-08-01
The single-event upset (SEU) vulnerability of common first- and second-order all-digital-phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest order pole of the loop filter and fraction-based phase detectors (PDs) may result in the worst case error response, i.e., limit cycle errors, often requiring system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs only result in temporary-frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.
Worst error performance of continuous Kalman filters. [for deep space navigation and maneuvers
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
The worst error performance of estimation filters is investigated for continuous systems. This pathological-performance study, which assumes no dynamical model (such as a Markov process) for the perturbations beyond a bounded amplitude, yields practical and dependable criteria for establishing the navigation and maneuver strategy of deep space missions.
Synthesis of robust nonlinear autopilots using differential game theory
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.
Effect of the mandible on mouthguard measurements of head kinematics.
Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B
2016-06-14
Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
The reduction of a "safety catastrophic" potential hazard: A case history
NASA Technical Reports Server (NTRS)
Jones, J. P.
1971-01-01
A worst case analysis is reported on the safety of time watch movements for triggering explosive packages on the lunar surface in an experiment to investigate physical lunar structural characteristics through induced seismic energy waves. Considered are the combined effects of low pressure, low temperature, lunar gravity, gear train error, and position. Control measures comprise a sealed control cavity and design requirements to prevent overbanking in the mainspring torque curve. The potential hazard is thus reduced to a negligible safety risk.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1985-01-01
For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst case partial band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over the system with hard decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the use of this normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used for the performance measures.
Worst case analysis: Earth sensor assembly for the tropical rainfall measuring mission observatory
NASA Technical Reports Server (NTRS)
Conley, Michael P.
1993-01-01
This worst case analysis verifies that the TRMMESA electronic design is capable of maintaining performance requirements when subjected to worst case circuit conditions. The TRMMESA design is a proven heritage design, capable of withstanding the most adverse circuit conditions. Changes made to the baseline DMSP design are relatively minor and do not adversely affect the worst case analysis of the TRMMESA electrical design.
Reduced backscattering cross section (Sigma degree) data from the Skylab S-193 radar altimeter
NASA Technical Reports Server (NTRS)
Brown, G. S.
1975-01-01
Backscattering cross section per unit scattering area data, reduced from measurements made by the Skylab S-193 radar altimeter over the ocean surface are presented. Descriptions of the altimeter are given where applicable to the measurement process. Analytical solutions are obtained for the flat surface impulse response for the case of a nonsymmetrical antenna pattern. Formulations are developed for converting altimeter AGC outputs into values for the backscattering cross section. Reduced data are presented for Missions SL-2, 3 and 4 for all modes of the altimeter where sufficient calibration existed. The problem of interpreting land scatter data is also discussed. Finally, a comprehensive error analysis of the measurement is presented and worst case random and bias errors are estimated.
Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling
NASA Astrophysics Data System (ADS)
Pour Aryan, N.; Heiß, L.; Schmitt-Landsiedel, D.; Georgakos, G.; Wirnshofer, M.
2012-09-01
In Adaptive Voltage Scaling (AVS), the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation for PVTA variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs, considerable power savings are achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs implementing in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated in a 16-bit multiplier test circuit, and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power saving potential of each Pre-Error detection approach.
NASA Technical Reports Server (NTRS)
Hill, Eric v. K.; Walker, James L., II; Rowell, Ginger H.
1995-01-01
Acoustic emission (AE) data were taken during hydroproof for three sets of ASTM standard 5.75 inch diameter filament wound graphite/epoxy bottles. All three sets of bottles had the same design and were wound from the same graphite fiber; the only difference was in the epoxies used. Two of the epoxies had similar mechanical properties, and because the acoustic properties of materials are a function of their stiffnesses, it was thought that the AE data from the two sets might also be similar; however, this was not the case. Therefore, the three resin types were categorized using dummy variables, which allowed the prediction of burst pressures for all three sets of bottles using a single neural network. Three bottles from each set were used to train the network. The resin category, the AE amplitude distribution data taken up to 25% of the expected burst pressure, and the actual burst pressures were used as inputs. Architecturally, the network consisted of a forty-three neuron input layer (a single categorical variable defining the resin type plus forty-two continuous variables for the AE amplitude frequencies), a fifteen neuron hidden layer for mapping, and a single output neuron for burst pressure prediction. The network trained on all three bottle sets was able to predict burst pressures in the remaining bottles with a worst case error of +6.59%, slightly greater than the desired goal of ±5%. This larger than desired error was due to poor resolution in the amplitude data for the third bottle set. When the third set of bottles was eliminated from consideration, only four hidden layer neurons were necessary to generate a worst case prediction error of -3.43%, well within the desired goal.
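A minimal sketch of a forward pass through the 43-15-1 architecture described in this abstract. The weights here are random (the paper trained on nine bottles), and the tanh hidden activation is an assumption, since the abstract does not specify one.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # 43 inputs (1 resin category + 42 AE amplitude bins) -> 15 hidden
    # neurons -> 1 output (predicted burst pressure).
    h = np.tanh(W1 @ x + b1)       # hidden "mapping" layer (activation assumed)
    return float(W2 @ h + b2)      # linear output neuron

W1 = rng.normal(size=(15, 43)) * 0.1
b1 = np.zeros(15)
W2 = rng.normal(size=(1, 15)) * 0.1
b2 = np.zeros(1)

x = rng.random(43)                 # one normalized input pattern (synthetic)
pred = mlp(x, W1, b1, W2, b2)      # burst-pressure prediction for this input
```

In the paper's reduced experiment, shrinking the hidden layer from fifteen to four neurons sufficed once the poorly resolved third bottle set was dropped.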
Control techniques to improve Space Shuttle solid rocket booster separation
NASA Technical Reports Server (NTRS)
Tomlin, D. D.
1983-01-01
The present Space Shuttle's control system does not prevent the Orbiter's main engines from being in gimbal positions that are adverse to solid rocket booster separation. By eliminating the attitude error and attitude rate feedback just prior to solid rocket booster separation, the detrimental effects of the Orbiter's main engines can be reduced. In addition, if angular acceleration feedback is applied, the gimbal torques produced by the Orbiter's engines can reduce the detrimental effects of the aerodynamic torques. This paper develops these control techniques and compares the separation capability of the developed control systems. Currently, with the worst case initial conditions and each Shuttle system dispersion aligned in the worst direction (which is more conservative than will be experienced in flight), the solid rocket booster has an interference with the Shuttle's external tank of 30 in. Elimination of the attitude error and attitude rate feedback reduces that interference to 19 in. Substitution of angular acceleration feedback reduces the interference to 6 in. The two latter interferences can be eliminated by a less conservative analysis technique, that is, by using a root sum square of the system dispersions.
[Cognitive errors in diagnostic decision making].
Gäbler, Martin
2017-10-01
Approximately 10-15% of our diagnostic decisions are faulty and may lead to unfavorable and dangerous outcomes that could be avoided. These diagnostic errors are mainly caused by cognitive biases in the diagnostic reasoning process. Our medical diagnostic decision-making is based on intuitive "System 1" and analytical "System 2" reasoning and can be deflected by unconscious cognitive biases. These deviations can be positively influenced on a systemic and an individual level. For the individual, metacognition (an internal stepping back from the decision-making process) and debiasing strategies, such as verification, falsification, and ruling out worst-case scenarios, can lead to improved diagnostic decision making.
SU-F-BRD-05: Robustness of Dose Painting by Numbers in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montero, A Barragan; Sterpin, E; Lee, J
Purpose: Proton range uncertainties may cause important dose perturbations within the target volume, especially when steep dose gradients are present as in dose painting. The aim of this study is to assess the robustness against setup and range errors for highly heterogeneous dose prescriptions (i.e., dose painting by numbers), delivered by proton pencil beam scanning. Methods: An automatic workflow, based on MATLAB functions, was implemented through scripting in RayStation (RaySearch Laboratories). It performs a gradient-based segmentation of the dose painting volume from 18FDG-PET images (GTVPET), and calculates the dose prescription as a linear function of the FDG-uptake value on each voxel. The workflow was applied to two patients with head and neck cancer. Robustness against setup and range errors of the conventional PTV margin strategy (prescription dilated by 2.5 mm) versus CTV-based (minimax) robust optimization (2.5 mm setup, 3% range error) was assessed by comparing the prescription with the planned dose for a set of error scenarios. Results: In order to ensure dose coverage above 95% of the prescribed dose in more than 95% of the GTVPET voxels while compensating for the uncertainties, the plans with a PTV generated a high overdose. For the nominal case, up to 35% of the GTVPET received doses 5% beyond prescription. For the worst of the evaluated error scenarios, the volume with 5% overdose increased to 50%. In contrast, for CTV-based plans this 5% overdose was present only in a small fraction of the GTVPET, which ranged from 7% in the nominal case to 15% in the worst of the evaluated scenarios. Conclusion: The use of a PTV leads to non-robust dose distributions with excessive overdose in the painted volume. In contrast, robust optimization yields robust dose distributions with limited overdose. RaySearch Laboratories is sincerely acknowledged for providing us with RayStation treatment planning system and for the support provided.
Derivation and experimental verification of clock synchronization theory
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1994-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst case conditions.
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
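The fault-tolerant midpoint idea behind the Midpoint Algorithm can be illustrated with a toy simulation. This is a sketch, not the experimental setup of the papers above: the number of clocks, reading-error bound, and fault model are all assumed for illustration.

```python
import random

def midpoint_correction(readings, f):
    """Fault-tolerant midpoint: discard the f smallest and f largest
    skew readings, then take the midpoint of the surviving extremes."""
    s = sorted(readings)
    trimmed = s[f:len(s) - f]
    return (trimmed[0] + trimmed[-1]) / 2.0

random.seed(1)
n, f = 7, 1                      # 7 clocks, tolerate 1 arbitrary fault
clocks = [random.uniform(-2.0, 2.0) for _ in range(n)]  # initial skews (ticks)

for _ in range(5):               # resynchronization rounds
    corrections = []
    for i in range(n):
        readings = []
        for j in range(n):
            if j == 0:           # clock 0 is malicious: arbitrary readings
                readings.append(random.uniform(-100.0, 100.0))
            else:                # bounded reading error of +/- 0.1 tick
                readings.append(clocks[j] - clocks[i] + random.uniform(-0.1, 0.1))
        corrections.append(midpoint_correction(readings, f))
    clocks = [c + d for c, d in zip(clocks, corrections)]

good_skew = max(clocks[1:]) - min(clocks[1:])
print(good_skew)  # good clocks stay tightly synchronized despite the fault
```

Trimming f values from each end guarantees that a single malicious reading can never become one of the surviving extremes, which is why the good clocks converge despite it.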
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^-4 on the available data sets.
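The core pipeline (overlapped angular patches, DCT, coefficient differences, sign binarization, Hamming distance) can be sketched as follows. This is a loose illustration only: it uses random arrays as stand-in normalized iris strips, and the patch size, overlap, and averaging choices are assumptions, not the paper's tuned parameters.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def iris_code(strip, patch=8):
    """Binarize differences of DCT coefficients between angularly
    adjacent, 50%-overlapped patches of a normalized iris strip."""
    D = dct_matrix(patch)
    step = patch // 2
    coeffs = []
    for start in range(0, strip.shape[1] - patch + 1, step):
        band = strip[:, start:start + patch].mean(axis=0)  # radial average
        coeffs.append(D @ band)
    diffs = np.diff(np.array(coeffs), axis=0)
    return (diffs > 0).astype(np.uint8).ravel()

def hamming(a, b):
    return float(np.mean(a != b))

rng = np.random.default_rng(0)
eye_a = rng.random((16, 64))
eye_b = rng.random((16, 64))
noisy_a = eye_a + 0.05 * rng.standard_normal(eye_a.shape)  # same eye, new capture

print(hamming(iris_code(eye_a), iris_code(noisy_a)))  # small intra-class distance
print(hamming(iris_code(eye_a), iris_code(eye_b)))    # inter-class distance near 0.5
```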
Improved correction for the tissue fraction effect in lung PET/CT imaging
NASA Astrophysics Data System (ADS)
Holman, Beverley F.; Cuplov, Vesna; Millner, Lynn; Hutton, Brian F.; Maher, Toby M.; Groves, Ashley M.; Thielemans, Kris
2015-09-01
Recently, there has been an increased interest in imaging different pulmonary disorders using PET techniques. Previous work has shown, for static PET/CT, that air content in the lung influences reconstructed image values and that it is vital to correct for this ‘tissue fraction effect’ (TFE). In this paper, we extend this work to include the blood component and also investigate the TFE in dynamic imaging. CT imaging and PET kinetic modelling are used to determine fractional air and blood voxel volumes in six patients with idiopathic pulmonary fibrosis. These values are used to illustrate best and worst case scenarios when interpreting images without correcting for the TFE. In addition, the fractional volumes were used to determine correction factors for the SUV and the kinetic parameters. These were then applied to the patient images. The kinetic parameters K1 and Ki along with the static parameter SUV were all found to be affected by the TFE with both air and blood providing a significant contribution to the errors. Without corrections, errors range from 34-80% in the best case and 29-96% in the worst case. In the patient data, without correcting for the TFE, regions of high density (fibrosis) appeared to have a higher uptake than lower density (normal appearing tissue), however this was reversed after air and blood correction. The proposed correction methods are vital for quantitative and relative accuracy. Without these corrections, images may be misinterpreted.
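A common form of tissue fraction correction divides the measured uptake by the per-voxel tissue fraction. The sketch below assumes the air and blood fractions are already known (the paper derives them from CT and kinetic modelling); the numbers are hypothetical, chosen only to show how the apparent fibrosis/normal-tissue ranking can reverse after correction, as the abstract reports.

```python
def tissue_fraction_corrected_suv(suv_measured, v_air, v_blood):
    """Divide the measured uptake by the tissue fraction so that voxels
    containing mostly air (or blood) are not misread as low-uptake tissue.
    Fractional volumes are per-voxel and must satisfy v_air + v_blood < 1."""
    v_tissue = 1.0 - v_air - v_blood
    if v_tissue <= 0:
        raise ValueError("voxel contains no tissue")
    return suv_measured / v_tissue

# A fibrotic voxel (little air) vs a normal-appearing voxel (mostly air):
fibrotic = tissue_fraction_corrected_suv(1.2, v_air=0.2, v_blood=0.1)
normal = tissue_fraction_corrected_suv(0.8, v_air=0.7, v_blood=0.1)
print(fibrotic, normal)  # the apparent ranking reverses after correction
```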
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
SU-E-T-551: PTV Is the Worst-Case of CTV in Photon Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrington, D; Liu, W; Park, P
2014-06-01
Purpose: To examine the supposition of the static dose cloud and the adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of the clinical target volume (CTV) dose distribution for photon therapy in head and neck (H and N) plans. Methods: Five diverse H and N plans clinically delivered at our institution were selected. Isocenter for each plan was shifted positively and negatively in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (3 mm) for a total of six shifted plans per original plan. The perturbed plan dose was recalculated in Eclipse (AAA v11.0.30) using the same, fixed fluence map as the original plan. The dose distributions for all plans were exported from the treatment planning system to determine the worst-case CTV dose distributions for each nominal plan. Two worst-case distributions, cold and hot, were defined by selecting the minimum or maximum dose per voxel from all the perturbed plans. The resulting dose volume histograms (DVH) were examined to evaluate the worst-case CTV and nominal PTV dose distributions. Results: Inspection demonstrates that the CTV DVH in the nominal dose distribution is indeed bounded by the CTV DVHs in the worst-case dose distributions. Furthermore, comparison of the D95% for the worst-case (cold) CTV and nominal PTV distributions by Pearson's chi-square test shows excellent agreement for all plans. Conclusion: The assumption that the nominal dose distribution for the PTV represents the worst-case dose distribution for the CTV appears valid for the five plans under examination. Although the worst-case dose distributions are unphysical, since the dose per voxel is chosen independently, the cold worst-case distribution serves as a lower bound for the worst possible CTV coverage. Minor discrepancies between the nominal PTV dose distribution and worst-case CTV dose distribution are expected since the dose cloud is not strictly static.
This research was supported by the NCI through grant K25CA168984, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by the Fraternal Order of Eagles Cancer Research Fund, and by the Career Development Award Program at Mayo Clinic.
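The voxelwise construction of the cold and hot worst-case distributions described in the abstract above reduces to a per-voxel min/max over the nominal and perturbed dose grids. A toy sketch with NumPy; the grid values and the use of `np.roll` as a stand-in for recalculating shifted plans are illustrative assumptions, not the Eclipse workflow:

```python
import numpy as np

rng = np.random.default_rng(42)
nominal = rng.uniform(60.0, 74.0, size=(4, 4, 4))  # toy nominal dose grid (Gy)

# Mimic isocenter shifts of +/- one step along each cardinal axis by rolling
# the grid (a stand-in for recomputing dose with a fixed fluence map).
perturbed = [np.roll(nominal, shift, axis=ax)
             for ax in range(3) for shift in (-1, 1)]

all_plans = np.stack([nominal] + perturbed)
cold = all_plans.min(axis=0)  # worst-case "cold" distribution
hot = all_plans.max(axis=0)   # worst-case "hot" distribution

# By construction the nominal dose is bounded voxelwise by the two worst cases.
print(bool(np.all(cold <= nominal) and np.all(nominal <= hot)))
```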
Time Safety Margin: Theory and Practice
2016-09-01
Basic Dive Recovery Terminology The Simplest Definition of TSM: Time Safety Margin is the time to directly travel from the worst-case vector to an...Safety Margin (TSM). TSM is defined as the time in seconds to directly travel from the worst case vector (i.e. worst case combination of parameters...invoked by this AFI, base recovery planning and risk management upon the calculated TSM. TSM is the time in seconds to directly travel from the worst case
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
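The round-off accumulation the abstract attributes to the FFT-based method can be seen in a few lines: advecting in Fourier space by repeated small phase shifts drifts away from applying the exact total phase shift once. This sketch illustrates only that accumulation; it does not reproduce the paper's modified Cooley-Tukey algorithm, and the grid size and step count are arbitrary.

```python
import numpy as np

n, a, dt, steps = 128, 1.0, 0.01, 1000
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u0 = np.exp(np.sin(x))                                # smooth periodic profile
k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi  # integer wavenumbers

# Stepped solution: apply the per-step phase shift 'steps' times.
u_hat = np.fft.fft(u0)
for _ in range(steps):
    u_hat = u_hat * np.exp(-1j * k * a * dt)
u_stepped = np.fft.ifft(u_hat).real

# Reference: apply the exact total phase shift in one multiplication.
u_exact = np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * k * a * dt * steps)).real

err = np.abs(u_stepped - u_exact).max()
print(err)  # pure round-off: tiny, but it accumulates with the step count
```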
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Viswanathan, A; Cormack, R
2015-06-15
Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and a 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of the N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥5, with standard deviation <2 mm for N ≥6, and worst-case scenario error <2 mm for N ≥11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45 × 10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT, requiring only user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
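The signature idea exploits the fact that rigid transforms preserve pairwise distances, so points can be paired across frames without testing every permutation. A minimal sketch under assumed geometry (random tips in a 100 mm cube, an arbitrary rotation/translation, ~0.5 mm noise); the actual SBR matching of N-point signatures is more elaborate:

```python
import numpy as np

def signatures(points):
    """Each point's signature: its sorted distances to all other points.
    Rigid transforms preserve distances, so signatures survive the change
    of coordinate frame."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:]  # drop the zero self-distance

def match_by_signature(ct_pts, emt_pts):
    """For each EMT point, pick the CT point with the closest signature."""
    sig_ct, sig_emt = signatures(ct_pts), signatures(emt_pts)
    return [int(np.argmin(np.linalg.norm(sig_ct - s, axis=1))) for s in sig_emt]

rng = np.random.default_rng(3)
ct = rng.uniform(0.0, 100.0, size=(6, 3))    # catheter tips in CT frame (mm)

theta = 0.7                                   # EMT frame: rotated, translated,
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],  # shuffled, and noisy
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
order = rng.permutation(6)
emt = ct[order] @ R.T + np.array([10.0, -5.0, 2.0]) + 0.5 * rng.standard_normal((6, 3))

print(match_by_signature(ct, emt))  # recovers the shuffled correspondence
```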
Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.
Mohammed, Nazmi A; Elkarim, Mohammed Abd
2015-08-10
This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
NASA Astrophysics Data System (ADS)
Kocan, M.; Garcia-Munoz, M.; Ayllon-Guerola, J.; Bertalot, L.; Bonnet, Y.; Casal, N.; Galdon, J.; Garcia-Lopez, J.; Giacomin, T.; Gonzalez-Martin, J.; Gunn, J. P.; Rodriguez-Ramos, M.; Reichle, R.; Rivero-Rodriguez, J. F.; Sanchis-Sanchez, L.; Vayakis, G.; Veshchev, E.; Vorpahl, C.; Walsh, M.; Walton, R.
2017-12-01
Thermal plasma loads to the ITER Fast Ion Loss Detector (FILD) are studied for the Q_DT = 10 burning plasma equilibrium using 3D field line tracing. The simulations are performed for a FILD insertion 9-13 cm past the port plasma facing surface, optimized for fast ion measurements, and include the worst-case perturbation of the plasma boundary and the error in the magnetic reconstruction. The FILD head is exposed to superimposed time-averaged ELM heat load, static inter-ELM heat flux and plasma radiation. The study includes the estimate of the instantaneous temperature rise due to individual 0.6 MJ controlled ELMs. The maximum time-averaged surface heat load is ≲12 MW/m² and will lead to an increase of the FILD surface temperature well below the melting temperature of the materials considered here, for a FILD insertion time of 0.2 s. The worst-case instantaneous temperature rise during controlled 0.6 MJ ELMs is also significantly smaller than the melting temperature of, e.g., tungsten or molybdenum, foreseen for the FILD housing.
Specifying design conservatism: Worst case versus probabilistic analysis
NASA Technical Reports Server (NTRS)
Miles, Ralph F., Jr.
1993-01-01
Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 2 2012-07-01 2012-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 253.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2010 CFR
2010-07-01
...: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000 bbls but not more than... must demonstrate OSFR in accordance with the following table: COF worst case oil-spill discharge volume... applicable table in paragraph (b)(1) or (b)(2) for a facility with a potential worst case oil-spill discharge...
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 2 2013-07-01 2013-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 2 2014-07-01 2014-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...
30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 2 2011-07-01 2011-07-01 false How do I determine the worst case oil-spill... ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To...
30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How do I determine the worst case oil-spill... INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate the amount...
Lower bound for LCD image quality
NASA Astrophysics Data System (ADS)
Olson, William P.; Balram, Nikhil
1996-03-01
The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics the information of interest is often at the highest frequencies in the image. Since LCDs are sampled data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above it is essential to have a lower (worst case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst case modulation transfer function (MTF). The intersection of the worst case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst case limiting resolution, this MTF is combined with the CTF to produce objective worst case image quality values using the modulation transfer function area (MTFA) metric.
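The phase dependence of a sampled display's output modulation is easy to demonstrate numerically. The sketch below models an idealized unit-pitch pixel grid sampling a sine and takes the minimum modulation over phase as the worst-case MTF value at that frequency; the grid size and phase sampling are assumptions, and the model omits the pixel aperture and other display effects.

```python
import numpy as np

def modulation(f, phase, n_pixels=256):
    """Modulation depth of a sampled sine of spatial frequency f
    (cycles/pixel) displayed on a grid of unit-pitch pixels."""
    x = np.arange(n_pixels)
    s = 0.5 + 0.5 * np.sin(2.0 * np.pi * f * x + phase)
    return (s.max() - s.min()) / (s.max() + s.min())

def worst_case_mtf(f, n_phases=200):
    """Worst case (minimum over input phase) of the modulation at f."""
    phases = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
    return min(modulation(f, p) for p in phases)

# At the Nyquist frequency (0.5 cycles/pixel) the worst phase samples every
# pixel at the same gray level, so worst-case modulation collapses to zero,
# while the best phase still yields full modulation.
print(worst_case_mtf(0.5))         # 0: worst case vanishes at Nyquist
print(modulation(0.5, np.pi / 2))  # 1: best case is full modulation
```

This collapse at high spatial frequency is exactly why a worst-case MTF, rather than a best-case one, must be intersected with the visual system's contrast threshold to find the frequency that is always detectable.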
Probabilistic Solar Energetic Particle Models
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.
2011-01-01
To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceeds, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provide the worst-case differential proton spectrum. This model is based on data from IMP-8 and GOES spacecraft that provide a data base extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
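The notion of a worst-case environment at a user-specified confidence level amounts to taking a high quantile of the mission-integrated fluence distribution. A Monte Carlo sketch under assumed statistics (Poisson event counts, lognormal event fluences); the rate and fluence parameters are illustrative and not fitted to the IMP-8/GOES database:

```python
import numpy as np

def worst_case_fluence(confidence, events_per_year, mission_years,
                       mu, sigma, n_trials=20000, seed=7):
    """Draw Poisson event counts for the mission, sum lognormal
    event-integrated fluences, and return the mission-total quantile at
    the requested confidence level (the 'worst-case' design environment)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(events_per_year * mission_years, size=n_trials)
    totals = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])
    return np.quantile(totals, confidence)

# Illustrative parameters only: ~6 events/year, 2-year mission.
f90 = worst_case_fluence(0.90, events_per_year=6, mission_years=2, mu=20, sigma=1.5)
f99 = worst_case_fluence(0.99, events_per_year=6, mission_years=2, mu=20, sigma=1.5)
print(f99 / f90)  # the design environment hardens as required confidence rises
```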
Zhu, Zhengfei; Liu, Wei; Gillin, Michael; Gomez, Daniel R; Komaki, Ritsuko; Cox, James D; Mohan, Radhe; Chang, Joe Y
2014-05-06
We assessed the robustness of passive scattering proton therapy (PSPT) plans for patients in a phase II trial of PSPT for stage III non-small cell lung cancer (NSCLC) by using the worst-case scenario method, and compared the worst-case dose distributions with the appearance of locally recurrent lesions. Worst-case dose distributions were generated for each of 9 patients who experienced recurrence after concurrent chemotherapy and PSPT to 74 Gy(RBE) for stage III NSCLC by simulating and incorporating uncertainties associated with set-up, respiration-induced organ motion, and proton range in the planning process. The worst-case CT scans were then fused with the positron emission tomography (PET) scans to locate the recurrence. Although the volumes enclosed by the prescription isodose lines in the worst-case dose distributions were consistently smaller than enclosed volumes in the nominal plans, the target dose coverage was not significantly affected: only one patient had a recurrence outside the prescription isodose lines in the worst-case plan. PSPT is a relatively robust technique. Local recurrence was not associated with target underdosage resulting from estimated uncertainties in 8 of 9 cases.
NASA Astrophysics Data System (ADS)
Bakker, J. F.; Paulides, M. M.; Christ, A.; Kuster, N.; van Rhoon, G. C.
2011-05-01
In this corrigendum, the authors would like to report typographic errors in figures 3 and 4 and to suggest a brief amendment to section 3.1 to avoid further misunderstandings. Figures 3 and 4: the y-axis tick should read 0.1 instead of 1 in both figure 3 (top) and figure 4 (top). In figure 3 (top), the title should be changed to 'SARwb' instead of 'SARwb,max'. Section 3.1. Numerical uncertainty: the following note should be added at the end of the paragraph or as a footnote: 'In order to obtain a worst-case estimate of the numerical uncertainty (table 4), all components were considered as correlated'. The authors would like to express their sincere apologies for the errors in the manuscript.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl; Kooy, Hanne M.; Heijmen, Ben J.M.
2015-06-01
Purpose: To shorten delivery times of intensity modulated proton therapy by reducing the number of energy layers in the treatment plan. Methods and Materials: We have developed an energy layer reduction method, which was implemented into our in-house-developed multicriteria treatment planning system “Erasmus-iCycle.” The method consisted of 2 components: (1) minimizing the logarithm of the total spot weight per energy layer; and (2) iteratively excluding low-weighted energy layers. The method was benchmarked by comparing a robust “time-efficient plan” (with energy layer reduction) with a robust “standard clinical plan” (without energy layer reduction) for 5 oropharyngeal cases and 5 prostate cases. Both plans of each patient had equal robust plan quality, because the worst-case dose parameters of the standard clinical plan were used as dose constraints for the time-efficient plan. Worst-case robust optimization was performed, accounting for setup errors of 3 mm and range errors of 3% + 1 mm. We evaluated the number of energy layers and the expected delivery time per fraction, assuming 30 seconds per beam direction, 10 ms per spot, and 400 Giga-protons per minute. The energy switching time was varied from 0.1 to 5 seconds. Results: The number of energy layers was on average reduced by 45% (range, 30%-56%) for the oropharyngeal cases and by 28% (range, 25%-32%) for the prostate cases. When assuming 1, 2, or 5 seconds energy switching time, the average delivery time was shortened from 3.9 to 3.0 minutes (25%), 6.0 to 4.2 minutes (32%), or 12.3 to 7.7 minutes (38%) for the oropharyngeal cases, and from 3.4 to 2.9 minutes (16%), 5.2 to 4.2 minutes (20%), or 10.6 to 8.0 minutes (24%) for the prostate cases. Conclusions: Delivery times of intensity modulated proton therapy can be reduced substantially without compromising robust plan quality. Shorter delivery times are likely to reduce treatment uncertainties and costs.
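The timing assumptions stated in the abstract translate directly into a simple delivery-time model. The plan parameters in the example below (beam, spot, and layer counts) are hypothetical, chosen only to show how the layer count drives the switching-time term:

```python
def delivery_time_s(n_beams, n_spots, total_gigaprotons, n_layers,
                    switch_time_s, beam_setup_s=30.0, spot_time_s=0.010,
                    gigaprotons_per_min=400.0):
    """Expected delivery time per fraction under the abstract's timing
    assumptions: 30 s per beam direction, 10 ms per spot, 400 Gp/min,
    plus one energy-switch delay per energy layer."""
    return (n_beams * beam_setup_s
            + n_spots * spot_time_s
            + total_gigaprotons / gigaprotons_per_min * 60.0
            + n_layers * switch_time_s)

# Hypothetical plan: 3 beams, 6000 spots, 40 Gp. Cutting the layer count
# from 80 to 45 removes 35 energy-switch delays per fraction.
standard = delivery_time_s(3, 6000, 40, 80, switch_time_s=2.0)
reduced = delivery_time_s(3, 6000, 40, 45, switch_time_s=2.0)
print((standard - reduced) / 60.0)  # minutes saved: 35 switches * 2 s = 70 s
```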
Dynamic safety assessment of natural gas stations using Bayesian network.
Zarei, Esmaeil; Azadeh, Ali; Khakzad, Nima; Aliabadi, Mostafa Mirzaei; Mohammadfam, Iraj
2017-01-05
Pipelines are one of the most popular and effective ways of transporting hazardous materials, especially natural gas. However, the rapid development of gas pipelines and stations in urban areas has introduced a serious threat to public safety and assets. Although different methods have been developed for risk analysis of gas transportation systems, a comprehensive methodology for risk analysis is still lacking, especially in natural gas stations. The present work is aimed at developing a dynamic and comprehensive quantitative risk analysis (DCQRA) approach for accident scenario and risk modeling of natural gas stations. In this approach, an FMEA is used for hazard analysis, while a Bow-tie diagram and Bayesian network are employed to model the worst-case accident scenario and to assess the risks. The results indicated that failure of the regulator system was the worst-case accident scenario, with human error as the largest contributing factor. Thus, in the risk management plan of natural gas stations, priority should be given to the most probable root events and the main contributing factors identified in the present study, in order to reduce the occurrence probability of the accident scenarios and thus alleviate the risks. Copyright © 2016 Elsevier B.V. All rights reserved.
Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.
2012-01-01
Objectives To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design Blinded comparison and criterion standard in matched cohorts. Setting Primary care university hospital. Participants Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions Not applicable. Main Outcome Measures The projected worst-case tibia-length-measurement error translated into a pQCT slice placement error of ±3 mm. We collected pQCT slices at the distal 4% tibia site, 3 mm proximal and 3 mm distal to that site, and then quantified BMD error attributable to slice placement. Results Absolute BMD error was greater for able-bodied than for SCI subjects (5.87 mg/cm³ vs 4.5 mg/cm³). However, the percentage error in BMD was larger for SCI than able-bodied subjects (4.56% vs 2.23%). Conclusions During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length-measurement error. PMID:17023249
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
NASA Technical Reports Server (NTRS)
Keckler, C. R.
1980-01-01
A high fidelity digital computer simulation was used to establish the viability of the Annular Suspension and Pointing System (ASPS) for satisfying the pointing and stability requirements of facility class payloads, such as the Solar Optical Telescope, when subjected to the Orbiter disturbance environment. The ASPS and its payload were subjected to disturbances resulting from crew motions in the Orbiter aft flight deck and VRCS thruster firings. Worst case pointing errors of 0.005 arc seconds were experienced under the disturbance environment simulated; this is well within the 0.08 arc seconds requirement specified by the payload.
Inter-satellite links for satellite autonomous integrity monitoring
NASA Astrophysics Data System (ADS)
Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco
2011-01-01
A new integrity monitoring mechanism, to be implemented on board a GNSS and taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). By a linear combination of the inter-satellite link observables, appropriate observables for both satellite orbit and clock monitoring are obtained, and the proposed algorithms make it possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms in a complete scenario (satellite-to-satellite and satellite-to-ground links) and in a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10^-9) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation on the clocks, but the latency of detection as well as the detection performance strongly depend on the noise added by the clock measurement system.
Proof of Heisenberg's error-disturbance relation.
Busch, Paul; Lahti, Pekka; Werner, Reinhard F
2013-10-18
While the slogan "no measurement without disturbance" has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al, Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.
Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.
Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z
2012-07-01
Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2017-06-21
The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated with inclusions of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimate the worst-case RF-induced heating in multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and there exist multiple SAR or power constraints to be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of this original non-convex problem, whose solution closely approximates the true worst-case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
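The "common approach" reviewed above — maximizing the ratio of two Hermitian forms — can be sketched numerically. The sketch below is illustrative only: the matrices are random stand-ins for the local-SAR and constraint matrices, not data from the paper, and all variable names are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8  # number of RF transmit channels

def random_psd(n):
    """Random Hermitian positive-definite matrix (stand-in for a SAR matrix)."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return a @ a.conj().T + n * np.eye(n)

Q_local = random_psd(n)   # local SAR quadratic form at the at-risk region
S_global = random_psd(n)  # global SAR / power constraint quadratic form

# Common approach: the worst-case excitation maximizes the Rayleigh
# quotient (x^H Q x)/(x^H S x), which equals the largest generalized
# eigenvalue of the pair (Q, S).
vals, vecs = eigh(Q_local, S_global)
worst_ratio = vals[-1]          # eigenvalues are returned in ascending order
x = vecs[:, -1]                 # worst-case channel weights

ratio = (x.conj() @ Q_local @ x).real / (x.conj() @ S_global @ x).real
```

With several simultaneous SAR or power constraints, the abstract argues that this single-ratio bound can underestimate the true worst case, which is why the paper replaces it with a semidefinite relaxation over all constraints.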
Code of Federal Regulations, 2012 CFR
2012-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Code of Federal Regulations, 2014 CFR
2014-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Code of Federal Regulations, 2013 CFR
2013-10-01
... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial Design of Experiments (DoE) is selected appropriately.
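The Expected Improvement criterion at the heart of EGO has a standard closed form; a minimal sketch for a maximization problem (generic, not SAGE III-specific) is:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Closed-form EI for maximization, given the surrogate model's
    predictive mean mu and standard deviation sigma at candidate points."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improve = mu - best
    z = np.divide(improve, sigma, out=np.zeros_like(sigma), where=sigma > 0)
    ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
    # Where the surrogate is exact (sigma == 0), EI reduces to max(improve, 0).
    return np.where(sigma > 0, ei, np.maximum(improve, 0.0))
```

The multi-start modification described above would simply run a local optimizer on this EI surface from several starting points and keep the distinct maximizers as the next batch of runs.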
The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.
Qu, Shaojian; Ji, Ying
2016-01-01
In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights on its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently in real-world applications.
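Because the inner maximization over a polytope of weights is a linear program, it attains its optimum at a vertex; a toy sketch of the worst-case weighted objective (hypothetical payoffs, not from the paper) is:

```python
import numpy as np

def worst_case_weighted(objectives, weight_vertices):
    """Inner problem of the worst-case weighted game: max over the weight
    polytope of w . f. A linear function over a polytope attains its
    maximum at a vertex, so a max over vertices suffices."""
    f = np.asarray(objectives, dtype=float)
    W = np.asarray(weight_vertices, dtype=float)
    return float(np.max(W @ f))

# A player with two objectives and two candidate strategies; the weight
# set is the simplex {w >= 0, w1 + w2 = 1}, whose vertices are e1 and e2.
vertices = np.array([[1.0, 0.0], [0.0, 1.0]])
f_a = np.array([3.0, 1.0])   # objective values under strategy a
f_b = np.array([2.0, 2.0])   # objective values under strategy b
# The robust player picks the strategy minimizing the worst-case weighted sum.
best = min(worst_case_weighted(f, vertices) for f in (f_a, f_b))
```

Here the robust choice is strategy b: its worst weighted objective (2.0) beats strategy a's (3.0), even though strategy a is better on one objective.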
Chi, Ching-Chi; Wang, Shu-Hui
2014-01-01
Compared to conventional therapies, biologics are more effective but more expensive in treating psoriasis. To evaluate the efficacy and cost-efficacy of biologic therapies for psoriasis, we conducted a meta-analysis to calculate the efficacy of etanercept, adalimumab, infliximab, and ustekinumab for at least 75% reduction in the Psoriasis Area and Severity Index score (PASI 75) and Physician's Global Assessment clear/minimal (PGA 0/1). The cost-efficacy was assessed by calculating the incremental cost-effectiveness ratio (ICER) per subject achieving PASI 75 and PGA 0/1. The incremental efficacy regarding PASI 75 was 55% (95% confidence interval (95% CI) 38%-72%), 63% (95% CI 59%-67%), 71% (95% CI 67%-76%), 67% (95% CI 62%-73%), and 72% (95% CI 68%-75%) for etanercept, adalimumab, infliximab, ustekinumab 45 mg, and ustekinumab 90 mg, respectively. The corresponding 6-month ICER regarding PASI 75 was $32,643 (best case $24,936; worst case $47,246), $21,315 (best case $20,043; worst case $22,760), $27,782 (best case $25,954; worst case $29,440), $25,055 (best case $22,996; worst case $27,075), and $46,630 (best case $44,765; worst case $49,373), respectively. The results regarding PGA 0/1 were similar. Infliximab and ustekinumab 90 mg had the highest efficacy, while adalimumab had the best cost-efficacy, followed by ustekinumab 45 mg and infliximab.
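The ICER per responder used above is incremental cost divided by incremental efficacy. A sketch follows; the cost figure is hypothetical, back-derived so the result lands near the adalimumab value quoted, not taken from the study.

```python
def icer_per_responder(incremental_cost, incremental_efficacy):
    """Cost per additional subject achieving the endpoint (e.g. PASI 75)."""
    if not 0 < incremental_efficacy <= 1:
        raise ValueError("incremental efficacy must be a proportion in (0, 1]")
    return incremental_cost / incremental_efficacy

# Hypothetical 6-month incremental cost of $13,429 with 63% incremental
# PASI 75 response gives an ICER near the adalimumab figure above.
icer = icer_per_responder(13_429, 0.63)
```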
30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.
Code of Federal Regulations, 2011 CFR
2011-07-01
... associated with the facility. In determining the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your...) For exploratory or development drilling operations, the size of your worst case discharge scenario is...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Wang, X; Li, H
Purpose: Proton therapy is more sensitive to uncertainties than photon treatment because the finite range of protons depends on tissue density. The worst-case scenario (WCS) method originally proposed by Lomax has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method is sufficient to account for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose of an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse square factor and range uncertainty, were explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and the x, y and z directions were created, and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario results. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases show higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of perturbed cases have lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable in evaluating the robustness of MFO IMPT plans for H&N patients. The extensive sampling approach using the fast approximate method could be used in the future to evaluate the effects of different factors on the robustness of IMPT plans.
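Voxelwise worst-case dose analysis of this kind can be sketched as follows; the dose arrays below are synthetic stand-ins, not patient data, and the scenario perturbations are mocked rather than computed from shifted isocenters.

```python
import numpy as np

rng = np.random.default_rng(1)
# Nominal dose to 1000 CTV voxels (Gy), synthetic.
nominal = np.full(1000, 60.0) + rng.normal(0, 0.5, 1000)

# Perturbed doses (stand-ins for +/-3 mm isocenter shifts and +/-3.5%
# range scaling), mocked here as random degradations of the nominal dose.
scenarios = [nominal - np.abs(rng.normal(0, s, 1000)) for s in (0.5, 1.0, 2.0)]

# Voxelwise worst case for target coverage: minimum dose over all scenarios.
worst_dose = np.min(np.stack([nominal, *scenarios]), axis=0)

def d95(dose):
    """Dose received by at least 95% of voxels (5th percentile)."""
    return float(np.percentile(dose, 5))
```

Comparing `d95(worst_dose)` against the D95 distribution of many random perturbations is the kind of check the abstract describes.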
Systems and methods for circuit lifetime evaluation
NASA Technical Reports Server (NTRS)
Heaps, Timothy L. (Inventor); Sheldon, Douglas J. (Inventor); Bowerman, Paul N. (Inventor); Everline, Chester J. (Inventor); Shalom, Eddy (Inventor); Rasmussen, Robert D. (Inventor)
2013-01-01
Systems and methods for estimating the lifetime of an electrical system in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes iteratively performing Worst Case Analysis (WCA) on a system design with respect to different system lifetimes using a computer to determine the lifetime at which the worst case performance of the system indicates the system will pass with zero margin or fail within a predetermined margin for error given the environment experienced by the system during its lifetime. In addition, performing WCA on a system with respect to a specific system lifetime includes identifying subcircuits within the system, performing Extreme Value Analysis (EVA) with respect to each subcircuit to determine whether the subcircuit fails EVA for the specific system lifetime, when the subcircuit passes EVA, determining that the subcircuit does not fail WCA for the specified system lifetime, when a subcircuit fails EVA performing at least one additional WCA process that provides a tighter bound on the WCA than EVA to determine whether the subcircuit fails WCA for the specified system lifetime, determining that the system passes WCA with respect to the specific system lifetime when all subcircuits pass WCA, and determining that the system fails WCA when at least one subcircuit fails WCA.
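EVA in this setting amounts to bounding a circuit response over the component tolerance box; for monotonic responses, evaluating every corner is exact. A minimal sketch (hypothetical resistor divider and tolerances, not from the patent):

```python
from itertools import product

def eva_bounds(func, tolerances):
    """Extreme Value Analysis: evaluate func at every corner of the
    component tolerance box and return (min, max). Exact for responses
    monotonic in each component, such as a resistor divider."""
    corners = [func(*combo)
               for combo in product(*[(lo, hi) for lo, hi in tolerances])]
    return min(corners), max(corners)

# 5 V reference into a divider of two nominally 10 kOhm, 1% resistors.
# End-of-life drift over the system lifetime would widen these intervals,
# which is how lifetime enters the iterative WCA described above.
divider = lambda r1, r2: 5.0 * r2 / (r1 + r2)
lo, hi = eva_bounds(divider, [(9.9e3, 10.1e3), (9.9e3, 10.1e3)])
```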
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
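A classic way to get constant time and memory per sample is a slope-filter ("swing"-style) segmentation. The sketch below is in that spirit, with a per-segment maximum error guarantee `eps`; it is not the algorithm proposed in the paper.

```python
import math

def online_pla(samples, eps):
    """Online piecewise linear approximation with O(1) work and memory per
    sample. Maintains feasible slope bounds [lo, hi] from an anchor point
    and emits a segment when the bounds become empty; any slope inside the
    bounds keeps every covered sample within eps of the line."""
    segments = []                        # (t_start, t_end, y_start, slope)
    t0, y0 = 0, samples[0]
    lo, hi = float("-inf"), float("inf")
    for t in range(1, len(samples)):
        y = samples[t]
        slo = (y - eps - y0) / (t - t0)  # slopes keeping this sample in band
        shi = (y + eps - y0) / (t - t0)
        if max(lo, slo) > min(hi, shi):  # sample cannot join current segment
            segments.append((t0, t - 1, y0, (lo + hi) / 2.0))
            t0, y0 = t, y                # restart segment at this sample
            lo, hi = float("-inf"), float("inf")
        else:
            lo, hi = max(lo, slo), min(hi, shi)
    slope = 0.0 if lo == float("-inf") else (lo + hi) / 2.0
    segments.append((t0, len(samples) - 1, y0, slope))
    return segments

def reconstruct(segments, n):
    out = [0.0] * n
    for t0, t1, y0, s in segments:
        for t in range(t0, t1 + 1):
            out[t] = y0 + s * (t - t0)
    return out

# Example: compress a slow sinusoid with a 0.05 error bound.
signal = [math.sin(0.1 * t) for t in range(200)]
segments = online_pla(signal, 0.05)
approx = reconstruct(segments, len(signal))
```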
NASA Astrophysics Data System (ADS)
Jacobsen, Gunnar; Xu, Tianhua; Popov, Sergei; Sergeyev, Sergey; Zhang, Yimo
2012-12-01
We present a study of the influence of dispersion-induced phase noise in CO-OFDM systems using software-based FFT multiplexing/IFFT demultiplexing techniques. The software-based system provides a method for rigorous evaluation of the phase noise variance caused by Common Phase Error (CPE) and Inter-Carrier Interference (ICI), including - for the first time to our knowledge - the effect of equalization enhanced phase noise (EEPN) in explicit form. This, in turn, leads to an analytic BER specification. Numerical results focus on a CO-OFDM system with 10-25 GS/s QPSK channel modulation. A worst-case constellation configuration is identified for the phase noise influence, and the resulting BER is compared to the BER of a conventional single-channel QPSK system with the same capacity as the CO-OFDM implementation. Results are evaluated as a function of transmission distance. For both types of systems, the phase noise variance increases significantly with increasing transmission distance. For a total capacity of 400 (1000) Gbit/s, the transmission distance keeping BER < 10^-2 for the worst-case CO-OFDM design is less than 800 (460) km, whereas for a single-channel QPSK system it is less than 1400 (560) km.
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Meier, R.; Eberhardt, P.; Krankowsky, D.; Hodges, R. R.
1994-01-01
In comet P/Halley the abundances of ammonia relative to water reported in the literature differ by about one order of magnitude, from roughly 0.1% up to 2%. Different observational techniques seem to have inherent systematic errors. Using the ion mass channels m/q = 19 amu/e, 18 amu/e and 17 amu/e of the Neutral Mass Spectrometer experiment aboard the spacecraft Giotto, we derive a production rate of ammonia of (1.5 +0.5/-0.7)% relative to water. Inside the contact surface we can explain our data by a nuclear source only. The uncertainty in our ammonia abundance is primarily a result of uncertainties in some key reaction coefficients. We discuss these reactions in detail; the quoted error range results from extreme assumptions on the rate coefficients. From our data, even in the worst case, we can exclude an ammonia abundance of only a few per mil.
Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry
2011-03-01
For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your scenario must discuss how to respond to... drilling operations, the size of your worst case discharge scenario is the daily volume possible from an...
Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements
NASA Astrophysics Data System (ADS)
Appel, Pontus
2005-01-01
For full 3-axis attitude determination the magnetic field vector and the Sun vector can be used. A Coarse Sun Sensor consisting of six solar cells, one on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the part of the Earth's surface that is illuminated by the Sun and visible from the satellite. The albedo light changes depending on the reflectivity of the Earth's surface, the satellite's position and the Sun's position. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model consisting of data tables is transformed into polynomial functions in order to save memory space. In the absolute worst case the attitude determination error can be held below 2°; in a nominal case it is better than 1°.
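Replacing the albedo data tables with polynomial functions, as described, trades a small fit error for memory. A sketch with a stand-in table (hypothetical values, not the thesis's model):

```python
import numpy as np

# Tabulated (hypothetical) albedo correction versus Sun-Earth-satellite
# angle; on board, the table is replaced by a low-order polynomial fit.
angles = np.linspace(0.0, np.pi, 50)
table = 0.3 * np.cos(angles / 2) ** 2    # stand-in for the model's data tables

# Degree-4 least-squares fit: 5 coefficients instead of a 50-entry table.
coeffs = np.polynomial.polynomial.polyfit(angles, table, 4)
approx = np.polynomial.polynomial.polyval(angles, coeffs)
max_err = float(np.max(np.abs(approx - table)))
```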
Single-Event Effect Performance of a Conductive-Bridge Memory EEPROM
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Berg, Melanie; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Seidleck, Christina; LaBel, Kenneth
2015-01-01
We investigated the heavy ion single-event effect (SEE) susceptibility of the industry's first stand-alone memory based on conductive-bridge memory (CBRAM) technology. The device is available as an electrically erasable programmable read-only memory (EEPROM). We found that single-event functional interrupt (SEFI) is the dominant SEE type for each operational mode (standby, dynamic read, and dynamic write/read). SEFIs occurred even while the device was statically biased in standby mode. Worst-case SEFIs resulted in errors that filled the entire memory space. A power cycle did not always clear the errors, so the corrupted cells had to be reprogrammed in some cases. The device is also vulnerable to bit upsets during dynamic write/read tests, although the frequency of the upsets is relatively low. The linear energy transfer threshold for cell upset is between 10 and 20 MeV·cm²/mg, with an upper-limit cross section of 1.6 × 10^-11 cm²/bit (95% confidence level) at 10 MeV·cm²/mg. In standby mode, the CBRAM array appears invulnerable to bit upsets.
Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe
2015-01-01
We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%- D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. 
Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.
2018-02-01
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface-water resources. Legislative action has been undertaken in many countries to protect surface water and groundwater from contamination by surface-applied agrochemicals; of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide are assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario is a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes, such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation, as accurately as possible. Since this model has been parameterized for only one soil and one weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model requiring less detailed information about soil and weather conditions but still representing the effect of soil and climate on pesticide leaching, using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision the detailed model provides for predicting pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but has complete coverage of the area, so that it selects a worst-case condition more accurately.
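Selecting a scenario that sits at a chosen spatial percentile of the predicted leaching distribution can be sketched as follows (synthetic concentrations standing in for model predictions, not EU data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Predicted leaching concentration (hypothetical units) for each
# soil/weather combination in a region, from the simpler model.
conc = rng.lognormal(mean=-1.0, sigma=1.0, size=5000)

# A worst-case scenario is the combination whose predicted concentration
# sits at a chosen spatial percentile, e.g. the 90th.
target = np.percentile(conc, 90)
idx = int(np.argmin(np.abs(conc - target)))
scenario_conc = float(conc[idx])
```

The combination at `idx` would then be parameterized in detail for registration runs; roughly 90% of the region leaches less than this scenario.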
Factors associated with disclosure of medical errors by housestaff.
Kronman, Andrea C; Paasche-Orlow, Michael; Orlander, Jay D
2012-04-01
Attributes of the organisational culture of residency training programmes may impact patient safety. Training environments are complex, composed of clinical teams, residency programmes, and clinical units. We examined the relationship between residents' perceptions of their training environment and disclosure of or apology for their worst error. Anonymous, self-administered surveys were distributed to Medicine and Surgery residents at Boston Medical Center in 2005. Surveys asked residents to describe their worst medical error, and to answer selected questions from validated surveys measuring elements of working environments that promote learning from error. Subscales measured the microenvironments of the clinical team, residency programme, and clinical unit. Univariate and bivariate statistical analyses examined relationships between trainee characteristics, their perceived learning environment(s), and their responses to the error. Out of 109 surveys distributed to residents, 99 surveys were returned (91% overall response rate), two incomplete surveys were excluded, leaving 97: 61% internal medicine, 39% surgery, 59% male residents. While 31% reported apologising for the situation associated with the error, only 17% reported disclosing the error to patients and/or family. More male residents disclosed the error than female residents (p=0.04). Surgery residents scored higher on the subscales of safety culture pertaining to the residency programme (p=0.02) and managerial commitment to safety (p=0.05). Our Medical Culture Summary score was positively associated with disclosure (p=0.04) and apology (p=0.05). Factors in the learning environments of residents are associated with responses to medical errors. Organisational safety culture can be measured, and used to evaluate environmental attributes of clinical training that are associated with disclosure of, and apology for, medical error.
Scheid, Anika; Nebel, Markus E
2012-07-09
Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. 
Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case - without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.
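The distinction between absolute and relative disturbances of the sampling probabilities can be sketched numerically (a toy categorical distribution standing in for SCFG-derived probabilities, not an actual grammar): because the exact probabilities span many orders of magnitude, a fixed absolute error swamps the small ones, while even a much larger relative error preserves the shape of the distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "exact" distribution whose probabilities span many orders of magnitude,
# standing in for the sampling probabilities derived from an SCFG.
p = np.sort(rng.dirichlet(np.full(50, 0.05)))[::-1]

# Absolute disturbance: noise of fixed magnitude 0.01, clipped and renormalized.
p_abs = np.clip(p + rng.uniform(-0.01, 0.01, p.size), 0.0, None)
p_abs /= p_abs.sum()

# Relative disturbance: each probability off by up to +/-10 percent.
p_rel = p * (1.0 + rng.uniform(-0.1, 0.1, p.size))
p_rel /= p_rel.sum()

# Total variation distance from the exact distribution.
tv_abs = 0.5 * np.abs(p - p_abs).sum()
tv_rel = 0.5 * np.abs(p - p_rel).sum()
print(tv_abs, tv_rel)
```

With these assumed magnitudes the absolutely disturbed distribution drifts much further (in total variation) from the exact one than the relatively disturbed distribution, consistent with the paper's observation that relative errors are far more benign.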
Scatterometer-Calibrated Stability Verification Method
NASA Technical Reports Server (NTRS)
McWatters, Dalia A.; Cheetham, Craig M.; Huang, Shouhua; Fischman, Mark A.; Chu, Anhua J.; Freedman, Adam P.
2011-01-01
The requirement for scatterometer combined transmit-receive gain variation knowledge is typically addressed by sampling a portion of the transmit signal, attenuating it with a known-stable attenuation, and coupling it into the receiver chain. This way, the gain variations of the transmit and receive chains are represented by this loop-back calibration signal, and can be subtracted from the received remote radar echo. This process presents certain challenges, such as transmit and receive components that lie outside of the loop-back path and are therefore not included in this calibration, as well as the impracticality of measuring the stability of the transmit and receive chains separately after fabrication without the measurement errors from the test setup exceeding the requirement for the flight instrument. To cover the RF stability design challenge, the portions of the scatterometer that are not calibrated by the loop-back (e.g., attenuators, switches, diplexers, couplers, and coaxial cables) are tightly thermally controlled, and have been characterized over temperature to contribute less than 0.05 dB of calibration error over worst-case thermal variation. To address the verification challenge, including the components that are not calibrated by the loop-back, a stable fiber optic delay line (FODL) was used to delay the transmitted pulse and to route it into the receiver. In this way, the internal loop-back signal amplitude variations can be compared to the full transmit/receive external path while the flight hardware is in the worst-case thermal environment. The practical delay for implementing the FODL is 100 μs. The scatterometer pulse width is 1 ms, so a test mode was incorporated early in the design phase to scale the 1 ms pulse at a 100 Hz pulse repetition frequency (a 10 ms pulse repetition interval, PRI), by a factor of 18, to a 55 μs pulse with a 556 μs PRI. This scaling maintains the duty cycle, thus maintaining a representative thermal state for the RF components.
The FODL consists of an RF-modulated fiber-optic transmitter, 20 km of SMF-28 standard single-mode fiber, and a photodetector. Thermoelectric cooling and insulating packaging are used to achieve high thermal stability of the FODL components. The chassis was insulated with 1-in. (2.5-cm) thermal isolation foam. Nylon rods support the Micarta plate, onto which are mounted four 5-km fiber spool boxes. A copper plate heat sink was mounted on top of the fiber boxes (with a thermal grease layer) and screwed onto the thermoelectric cooler plate. Another thermal isolation layer in the middle separates the fiber-optic chamber from the RF electronics components, which are also mounted on a copper plate that is screwed onto another thermoelectric cooler. The scatterometer subsystem's overall stability was successfully verified to be calibratable to within 0.1 dB error in thermal vacuum (TVAC) testing with the fiber-optic delay line, while the scatterometer temperature was ramped from 10 to 30 C, a much larger temperature range than the worst-case expected seasonal variations.
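The duty-cycle-preserving test-mode scaling described above is easy to check numerically (assuming the units stripped in transcription are microseconds: a 1 ms pulse on a 10 ms repetition interval, both scaled by 18):

```python
# Duty-cycle-preserving pulse scaling for the FODL test mode.
# Values from the abstract; microsecond units for the scaled pulse are assumed.
pulse_width = 1e-3            # 1 ms scatterometer pulse
pri = 1.0 / 100.0             # 10 ms pulse repetition interval (100 Hz repetition rate)
scale = 18

scaled_pulse = pulse_width / scale    # ~55.6 microseconds
scaled_pri = pri / scale              # ~555.6 microseconds (556 us in the abstract)

duty_original = pulse_width / pri     # 10% duty cycle
duty_scaled = scaled_pulse / scaled_pri
print(scaled_pulse * 1e6, scaled_pri * 1e6, duty_original, duty_scaled)
```

Dividing both the pulse width and the PRI by the same factor leaves the duty cycle, and hence the average RF dissipation, unchanged, which is exactly why the thermal state stays representative.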
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
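As a rough illustration of DFT-based beat-frequency extraction (all signal parameters here are hypothetical, not the actual LITA settings), the frequency of a damped oscillation can be recovered from the peak of its magnitude spectrum with a relative error well below the 1.5% bound quoted above:

```python
import numpy as np

fs = 2.0e9               # sample rate (hypothetical)
n = 4096
t = np.arange(n) / fs
f_beat = 2.47e8          # "true" beat frequency (hypothetical)
tau = 0.4e-6             # damping time constant of the signal (hypothetical)

signal = np.exp(-t / tau) * np.cos(2.0 * np.pi * f_beat * t)

# Beat-frequency estimate: peak of the windowed DFT magnitude spectrum.
spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f_est = freqs[spec.argmax()]

rel_err = abs(f_est - f_beat) / f_beat
print(f_est, rel_err)
```

The residual error is set by the bin spacing fs/n relative to the number of recorded oscillations, which is why the abstract's DFT error depends on the oscillation count, chirp, and damping rate.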
Taylor, Lauren J; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Brasel, Karen J; Johnson, Sara K; Zelenski, Amy; Rathouz, Paul J; Zhao, Qianqian; Kwekkeboom, Kristine L; Campbell, Toby C; Schwarze, Margaret L
2017-06-01
Although many older adults prefer to avoid burdensome interventions with limited ability to preserve their functional status, aggressive treatments, including surgery, are common near the end of life. Shared decision making is critical to achieve value-concordant treatment decisions and minimize unwanted care. However, communication in the acute inpatient setting is challenging. To evaluate the proof of concept of an intervention to teach surgeons to use the Best Case/Worst Case framework as a strategy to change surgeon communication and promote shared decision making during high-stakes surgical decisions. Our prospective pre-post study was conducted from June 2014 to August 2015, and data were analyzed using a mixed methods approach. The data were drawn from decision-making conversations between 32 older inpatients with an acute nonemergent surgical problem, 30 family members, and 25 surgeons at 1 tertiary care hospital in Madison, Wisconsin. A 2-hour training session to teach each study-enrolled surgeon to use the Best Case/Worst Case communication framework. We scored conversation transcripts using OPTION 5, an observer measure of shared decision making, and used qualitative content analysis to characterize patterns in conversation structure, description of outcomes, and deliberation over treatment alternatives. The study participants were patients aged 68 to 95 years (n = 32), 44% of whom had 5 or more comorbid conditions; family members of patients (n = 30); and surgeons (n = 17). The median OPTION 5 score improved from 41 preintervention (interquartile range, 26-66) to 74 after Best Case/Worst Case training (interquartile range, 60-81). Before training, surgeons described the patient's problem in conjunction with an operative solution, directed deliberation over options, listed discrete procedural risks, and did not integrate preferences into a treatment recommendation. 
After training, surgeons using Best Case/Worst Case clearly presented a choice between treatments, described a range of postoperative trajectories including functional decline, and involved patients and families in deliberation. Using the Best Case/Worst Case framework changed surgeon communication by shifting the focus of decision-making conversations from an isolated surgical problem to a discussion about treatment alternatives and outcomes. This intervention can help surgeons structure challenging conversations to promote shared decision making in the acute setting.
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 28 2011-07-01 2011-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 29 2012-07-01 2012-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 29 2013-07-01 2013-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
40 CFR 300.324 - Response to worst case discharges.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 28 2014-07-01 2014-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 1
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
Electrical characterization and qualification tests were performed on the RCA MWS5001D, 1024 by 1-bit, CMOS, random access memory. Characterization tests were performed on five devices. The tests included functional tests, an AC parametric worst-case pattern selection test, determination of the worst-case transition for setup and hold times, and a series of schmoo plots. The qualification tests were performed on 32 devices and included a 2000-hour burn-in with electrical tests performed at 0 hours and after 168, 1000, and 2000 hours of burn-in. The tests performed included functional tests and AC and DC parametric tests. All of the tests in the characterization phase, with the exception of the worst-case transition test, were performed at ambient temperatures of 25, -55 and 125 C. The worst-case transition test was performed at 25 C. The pre-burn-in electrical tests were performed at 25, -55, and 125 C. All burn-in endpoint tests were performed at 25, -40, -55, 85, and 125 C.
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of 0.47 and 5.6 degrees, respectively, on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
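The downtilt/azimuth error metric used above can be made concrete with a small sketch (the East-North-Up frame and the boresight-along-local-z convention are assumptions for illustration, not taken from the paper): given the antenna's rotation into the Earth frame, both angles follow directly from the boresight direction.

```python
import numpy as np

def downtilt_azimuth(R):
    """Downtilt and azimuth (degrees) of an antenna whose boresight is its local
    +z axis, given its rotation R into an East-North-Up Earth frame."""
    east, north, up = R @ np.array([0.0, 0.0, 1.0])
    azimuth = np.degrees(np.arctan2(east, north)) % 360.0          # clockwise from North
    downtilt = np.degrees(np.arctan2(-up, np.hypot(east, north)))  # below horizontal
    return downtilt, azimuth

# A boresight pointing due East, tilted 5 degrees below the horizon.
c, s = np.cos(np.radians(5.0)), np.sin(np.radians(5.0))
R = np.array([[0.0, s, c],
              [1.0, 0.0, 0.0],
              [0.0, c, -s]])
dt, az = downtilt_azimuth(R)
print(round(dt, 6), round(az, 6))    # 5.0 90.0
```

Evaluating errors separately on these two angles, rather than on a single rotational metric, is the refinement the paper credits for its azimuth accuracy gain.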
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Query Optimization in Distributed Databases.
1982-10-01
general, the strategy a31 a11 a 3 is more time-consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires...analytic behavior of those heuristic algorithms. Although some analytic results of worst-case and average-case analysis are difficult to obtain, some...
End-to-end commissioning demonstration of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Acton, D. Scott; Towell, Timothy; Schwenker, John; Shields, Duncan; Sabatke, Erin; Contos, Adam R.; Hansen, Karl; Shi, Fang; Dean, Bruce; Smith, Scott
2007-09-01
The one-meter Testbed Telescope (TBT) has been developed at Ball Aerospace to facilitate the design and implementation of the wavefront sensing and control (WFSC) capabilities of the James Webb Space Telescope (JWST). We have recently conducted an "end-to-end" demonstration of the flight commissioning process on the TBT. This demonstration started with the Primary Mirror (PM) segments and the Secondary Mirror (SM) in random positions, traceable to the worst-case flight deployment conditions. The commissioning process detected and corrected the deployment errors, resulting in diffraction-limited performance across the entire science FOV. This paper will describe the commissioning demonstration and the WFSC algorithms used at each step in the process.
Conservative classical and quantum resolution limits for incoherent imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
2018-06-01
I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 6, 031033 (2016)] remains superior to direct imaging for sufficiently high photon numbers.
ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*
Kim, Donghwan; Fessler, Jeffrey A.
2017-01-01
This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
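FISTA itself is compact; a minimal sketch for the canonical ℓ1-regularized least-squares problem (the standard textbook form, not the paper's newly proposed variant) shows the proximal gradient, soft-thresholding, and momentum steps whose worst-case bounds the paper analyzes. The problem sizes and data below are illustrative.

```python
import numpy as np

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (textbook form)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    z, t = x, 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)               # gradient at the momentum point z
        w = z - g / L                       # proximal gradient step ...
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # ... soft-thresholded
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum update
        x, t = x_new, t_new
    return x

# Sparse recovery demo: 40 noisy measurements of a 5-sparse, 100-dimensional signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = fista(A, b, lam=0.1)
err = np.linalg.norm(x_hat - x_true)
print(err)
```

The step coefficients (the `t` sequence) are exactly what the paper re-optimizes against a worst-case bound of the composite gradient mapping rather than of the cost function.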
Grieger, Khara D; Hansen, Steffen F; Sørensen, Peter B; Baun, Anders
2011-09-01
Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C(60) in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C(60) in these applications as well as demonstrate the use of worst-case conditions to optimize future research efforts for other nanomaterials. Through the application of the WCD model, we find that the most probable worst-case conditions for both case studies include i) active uptake mechanisms, ii) accumulation in organisms, iii) ecotoxicological response mechanisms such as reactive oxygen species (ROS) production and cell membrane damage or disruption, iv) surface properties of nZVI and C(60), and v) acute exposure tolerance of organisms. Additional estimates of worst-case conditions for C(60) also include the physical location of C(60) in the environment from surface run-off, cellular exposure routes for heterotrophic organisms, and the presence of light to amplify adverse effects. 
Based on results of this analysis, we recommend the prioritization of research for the selected applications within the following areas: organism active uptake ability of nZVI and C(60) and ecotoxicological response end-points and response mechanisms including ROS production and cell membrane damage, full nanomaterial characterization taking into account detailed information on nanomaterial surface properties, and investigations of dose-response relationships for a variety of organisms. Copyright © 2011 Elsevier B.V. All rights reserved.
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
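One ingredient of the server design above, sizing a retransmission budget against a Poisson fault-arrival model, can be sketched as follows (an illustrative sizing calculation with hypothetical numbers, not the paper's exact server design):

```python
from math import exp

def min_retransmission_budget(fault_rate, interval, p_target):
    """Smallest per-interval retransmission budget k such that the probability of
    more than k Poisson fault arrivals in `interval` stays below p_target."""
    lam = fault_rate * interval      # expected faults per replenishment interval
    k = 0
    term = exp(-lam)                 # P(N = 0)
    cdf = term
    while 1.0 - cdf > p_target:      # tail P(N > k) still above the target
        k += 1
        term *= lam / k              # P(N = k) recursively from P(N = k - 1)
        cdf += term
    return k

# Hypothetical numbers: 10 faults/s on average, a 10 ms server replenishment
# period, and a 1e-9 per-interval exceedance probability target.
budget = min_retransmission_budget(10.0, 0.01, 1e-9)
print(budget)    # 6
```

Because the Poisson tail decays super-exponentially, even a stringent reliability goal needs only a handful of reserved retransmissions per interval, which is the intuition behind the two-orders-of-magnitude bandwidth saving over statically pre-allocated slots.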
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
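The tolerance to a 3° rotational setup error has a simple geometric counterpart (an illustrative rigid-body sketch, not the treatment-planning-system dose recomputation used in the study): rotating a point a few centimetres from the isocenter by 3° about each Cartesian axis displaces it by only a few millimetres.

```python
import numpy as np

def rot(axis, deg):
    """Rotation matrix about a Cartesian axis (0 = x, 1 = y, 2 = z)."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    if axis == 0:
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 1:
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Worst-case scenario from the abstract: 3 degrees about every axis at once.
R = rot(0, 3) @ rot(1, 3) @ rot(2, 3)

p = np.array([0.0, 0.0, 50.0])        # a point 50 mm from the isocenter
shift = np.linalg.norm(R @ p - p)     # rigid-body displacement in mm
print(round(shift, 2))                # ~3.7 mm
```

A millimetre-scale shift of structures near the isocenter is comparable to typical planning margins, which is consistent with the small EUD and TCP changes reported above.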
Error Recovery in the Time-Triggered Paradigm with FTT-CAN
Pedreiras, Paulo; Almeida, Luís
2018-01-01
Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots. PMID:29324723
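One ingredient of the server-sizing argument in the FTT-CAN abstract above is bounding the number of faults to recover within a window, using the Poisson fault-arrival model. This sketch is not the authors' full method, and the rate, window, and reliability goal are made-up values:

```python
import math

def worst_case_faults(rate_per_hour, interval_s, goal):
    """Smallest k such that P(more than k Poisson faults in the interval)
    is at most `goal`; k is then the worst case to provision for."""
    lam = rate_per_hour * interval_s / 3600.0   # expected faults per interval
    term = math.exp(-lam)                       # P(N = 0)
    cdf, k = term, 0
    while 1.0 - cdf > goal:
        k += 1
        term *= lam / k                         # P(N = k) from P(N = k-1)
        cdf += term
    return k

# e.g. 30 faults/hour, a 100 ms recovery window, 1e-9 per-window goal
print(worst_case_faults(30.0, 0.1, 1e-9))
```

The real design additionally has to schedule the retransmission server so the provisioned capacity does not break schedulability of the rest of the message set.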
Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects
ERIC Educational Resources Information Center
Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor
2011-01-01
Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2014 CFR
2014-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2012 CFR
2012-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
30 CFR 553.13 - How much OSFR must I demonstrate?
Code of Federal Regulations, 2013 CFR
2013-07-01
... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...
41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What is meant by "reasonable worst case fire scenario"? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...
41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is meant by "reasonable worst case fire scenario"? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, or allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
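The adaptive-design result above builds on the conditional type 1 error of the fixed-sample test given the interim data, a standard quantity. A sketch under the usual normal-approximation assumptions; the function name and numbers are illustrative, not from the paper:

```python
from statistics import NormalDist

N = NormalDist()

def conditional_type1_error(z1, t, alpha=0.025):
    """P(cross the fixed-design final boundary | interim z-statistic z1 at
    information fraction t), computed under H0 via the normal approximation."""
    za = N.inv_cdf(1 - alpha)                   # final critical value
    return 1 - N.cdf((za - z1 * t ** 0.5) / (1 - t) ** 0.5)

# An interim statistic exactly "on track" to the boundary gives about 50%
print(conditional_type1_error(z1=N.inv_cdf(0.975) / 0.5 ** 0.5, t=0.5))
```

The worst-case experimenter in the abstract effectively chooses the second-stage design to inflate this conditional error wherever the interim data allow it.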
NASA Astrophysics Data System (ADS)
Alpha Collaboration; Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.
2013-04-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.
Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.
2013-01-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime. PMID:23653197
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-01-01
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
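For reference, the baseline that both the adaptive and H-infinity variants extend is the ordinary Kalman recursion. A minimal scalar sketch; this is not the authors' algorithm, and the noise settings are arbitrary:

```python
def kalman_1d(zs, a=1.0, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for x_{k+1} = a*x_k + w, z_k = x_k + v,
    with process variance q and measurement variance r."""
    x, p, out = x0, p0, []
    for z in zs:
        # predict through the dynamics
        x, p = a * x, a * a * p + q
        # update with the measurement
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out

# Estimates pull toward a steady measurement despite the pessimistic prior
print(kalman_1d([5.0] * 50)[-1])
```

The adaptive variants replace the fixed `q`/`r` balance with data-driven fading factors, while the H-infinity view bounds the estimation error for the worst admissible noise instead of assuming Gaussian statistics.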
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-12-19
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.
Charman, A E; Amole, C; Ashkezari, M D; Baquero-Ruiz, M; Bertsche, W; Butler, E; Capra, A; Cesar, C L; Charlton, M; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Isaac, C A; Jonsell, S; Kurchaninov, L; Little, A; Madsen, N; McKenna, J T K; Menary, S; Napoli, S C; Nolan, P; Olin, A; Pusa, P; Rasmussen, C Ø; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Thompson, R I; van der Werf, D P; Wurtele, J S; Zhmoginov, A I
2013-01-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.
Defect tolerance in resistor-logic demultiplexers for nanoelectronics.
Kuekes, Philip J; Robinett, Warren; Williams, R Stanley
2006-05-28
Since defect rates are expected to be high in nanocircuitry, we analyse the performance of resistor-based demultiplexers in the presence of defects. The defects observed to occur in fabricated nanoscale crossbars are stuck-open, stuck-closed, stuck-short, broken-wire, and adjacent-wire-short defects. We analyse the distribution of voltages on the nanowire output lines of a resistor-logic demultiplexer, based on an arbitrary constant-weight code, when defects occur. These analyses show that resistor-logic demultiplexers can tolerate small numbers of stuck-closed, stuck-open, and broken-wire defects on individual nanowires, at the cost of some degradation in the circuit's worst-case voltage margin. For stuck-short and adjacent-wire-short defects, and for nanowires with too many defects of the other types, the demultiplexer can still achieve error-free performance, but with a smaller set of output lines. This design thus has two layers of defect tolerance: the coding layer improves the yield of usable output lines, and an avoidance layer guarantees that error-free performance is achieved.
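A toy model of the demultiplexer's worst-case voltage margin: if an output line's voltage scales with the overlap between its codeword and the applied address, the defect-free margin is set by the maximum pairwise overlap of the constant-weight code. This simplified sketch ignores the defect types and the real resistor-network physics discussed in the abstract:

```python
from itertools import combinations

def constant_weight_code(n, w):
    """All binary n-bit words of Hamming weight w (the trivial such code)."""
    words = []
    for ones in combinations(range(n), w):
        word = [0] * n
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
    return words

def worst_case_margin(code):
    """Normalized worst-case voltage margin: the selected line sits at
    V_high, an unselected line at (overlap/w)*V_high, so the margin is
    (w - max pairwise overlap)/w."""
    w = sum(code[0])
    worst = 0
    for a, b in combinations(code, 2):
        overlap = sum(x & y for x, y in zip(a, b))
        worst = max(worst, overlap)
    return (w - worst) / w

code = constant_weight_code(4, 2)   # 6 codewords
print(worst_case_margin(code))      # distinct weight-2 words share <= 1 bit
```

In this toy model a stuck-closed or stuck-open defect effectively perturbs one line's overlap count, eroding the margin, which mirrors the graceful degradation the authors analyse.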
A game theory approach to target tracking in sensor networks.
Gu, Dongbing
2011-02-01
In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.
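The "smart adversary" formulation above is, at its core, a zero-sum game in which the estimator picks the action whose worst-case payoff is best. A toy pure-strategy maximin over a small payoff matrix, with illustrative numbers only:

```python
def maximin(payoff):
    """Pure-strategy security level: the row player (the estimator) picks
    the row whose worst-case column (the adversary's best reply) is largest."""
    best = max(range(len(payoff)), key=lambda i: min(payoff[i]))
    return best, min(payoff[best])

# Row 1 guarantees at least 2; row 0 guarantees only 1
print(maximin([[1, 4], [3, 2]]))   # → (1, 2)
```

The minimax filter in the abstract applies the same principle in a continuous setting, minimizing estimation error against the worst admissible noise rather than against a finite set of adversary moves.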
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor
2016-07-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám
2016-01-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations. PMID:27493566
From Usability Engineering to Evidence-based Usability in Health IT.
Marcilly, Romaric; Peute, Linda; Beuscart-Zephir, Marie-Catherine
2016-01-01
Usability is a critical factor in the acceptance, safe use, and success of health IT. The User-Centred Design process is widely promoted to improve usability. However, this traditional case-by-case approach, rooted in a sound understanding of users' needs, is not sufficient to improve technologies' usability and prevent usability-induced use-errors that may harm patients. It should be enriched with empirical evidence. This evidence concerns design elements (which design principles are most valuable, and which usability mistakes are worst) and usability evaluation methods (which combination of methods is most suitable in which context). To build this evidence, several steps must be fulfilled and challenges must be overcome. Some attempts to gather evidence on design elements of health IT and on usability evaluation methods exist and are summarized. A concrete instance of evidence-based usability design principles for medication-related alerting systems is briefly described.
Adaptive Estimation of Multiple Fading Factors for GPS/INS Integrated Navigation Systems.
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2017-06-01
The Kalman filter has been widely applied in the field of dynamic navigation and positioning. However, its performance will be degraded in the presence of significant model errors and uncertain interferences. In the literature, the fading filter was proposed to control the influences of the model errors, and the H-infinity filter can be adopted to address the uncertainties by minimizing the estimation error in the worst case. In this paper, a new multiple fading factor, suitable for the Global Positioning System (GPS) and the Inertial Navigation System (INS) integrated navigation system, is proposed based on the optimization of the filter, and a comprehensive filtering algorithm is constructed by integrating the advantages of the H-infinity filter and the proposed multiple fading filter. Measurement data of the GPS/INS integrated navigation system are collected under actual conditions. Stability and robustness of the proposed filtering algorithm are tested with various experiments, and contrastive analyses are performed with the measurement data. Results demonstrate that both the filter divergence and the influences of outliers are restrained effectively with the proposed filtering algorithm, and the precision of the filtering results is improved simultaneously.
Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A
2015-01-01
The aim of this paper is to present a smartphone based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from built-in rear-camera at the maximum available rate (30 Hz) and the smartphone GPU has been used by Renderscript API for high performance frame-by-frame image acquisition and computing in order to obtain PPG signal and PP interval time series. The relative error of mean heart rate is negligible. In addition, measurement posture and the employed smartphone model influences on the beat-to-beat error measurement of heart rate and HRV indices have been analyzed. Then, the standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in supine measurement posture, significant device influence on the SDE has been found and the SDE is lower with Samsung S5 than Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frames processing.
The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU has been used through the Renderscript API for high-performance frame-by-frame image acquisition and computation in order to obtain the PPG signal and the PP interval time series. The relative error of mean heart rate is negligible. In addition, the influences of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices have been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE has been found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
Less than severe worst case accidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, G.A.
1996-08-01
Many systems can provide tremendous benefit if operating correctly, produce only an inconvenience if they fail to operate, but have extreme consequences if they are only partially disabled such that they operate erratically or prematurely. In order to assure safety, systems are often tested against the most severe environments and accidents that are considered possible to ensure either safe operation or safe failure. However, it is often the less severe environments which result in the "worst case accident," since these are the conditions in which part of the system may be exposed or rendered unpredictable prior to total system failure. Some examples of less severe mechanical, thermal, and electrical environments which may actually be worst case are described as cautions for others in industries with high-consequence operations or products.
Optimal and robust control of transition
NASA Technical Reports Server (NTRS)
Bewley, T. R.; Agarwal, R.
1996-01-01
Optimal and robust control theories are used to determine feedback control rules that effectively stabilize a linearly unstable flow in a plane channel. Wall transpiration (unsteady blowing/suction) with zero net mass flux is used as the control. Control algorithms are considered that depend both on full flowfield information and on estimates of that flowfield based on wall skin-friction measurements only. The development of these control algorithms accounts for modeling errors and measurement noise in a rigorous fashion; these disturbances are considered in both a structured (Gaussian) and unstructured ('worst case') sense. The performance of these algorithms is analyzed in terms of the eigenmodes of the resulting controlled systems, and the sensitivity of individual eigenmodes to both control and observation is quantified.
Neural Network Prediction of Aluminum-Lithium Weld Strengths from Acoustic Emission Amplitude Data
NASA Technical Reports Server (NTRS)
Hill, Eric v. K.; Israel, Peggy L.; Knotts, Gregory L.
1993-01-01
Acoustic Emission (AE) flaw growth activity was monitored in aluminum-lithium weld specimens from the onset of tensile loading to failure. Data on actual ultimate strengths together with AE data from the beginning of loading up to 25 percent of the expected ultimate strength were used to train a backpropagation neural network to predict ultimate strengths. Architecturally, the fully interconnected network consisted of an input layer for the AE amplitude data, a hidden layer to accommodate failure mechanism mapping, and an output layer for ultimate strength prediction. The trained network was then applied to the prediction of ultimate strengths in the remaining six specimens. The worst-case prediction error was found to be +2.6 percent.
Integrated Optoelectronic Networks for Application-Driven Multicore Computing
2017-05-08
hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. The key challenges for waveguide photonics... improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and... photonic crossbar architectures (Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7
Method of Generating Transient Equivalent Sink and Test Target Temperatures for Swift BAT
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2004-01-01
The NASA Swift mission has a 600-km altitude and a 22 degrees maximum inclination. The sun angle varies from 45 degrees to 180 degrees in normal operation. As a result, the environmental heat fluxes absorbed by the Burst Alert Telescope (BAT) radiator and loop heat pipe (LHP) compensation chambers (CCs) vary transiently, and so do the equivalent sink temperatures for the radiator and CCs. In thermal performance verification testing in vacuum, the radiator and CCs radiated heat to sink targets. This paper presents an analytical technique for generating orbit transient equivalent sink temperatures and a technique for generating transient sink target temperatures for the radiator and LHP CCs. Using these techniques, transient target temperatures for the radiator and LHP CCs were generated for three thermal environmental cases: worst hot case, worst cold case, and cooldown and warmup between the worst hot case in sunlight and the worst cold case in eclipse, and for three heat transport values: 128 W, 255 W, and 382 W. The 128 W case assumed that the two LHPs share the 255 W total equally, each transporting about 128 W to the radiator. The 255 W case assumed that one LHP fails, so that the remaining LHP transports all the waste heat from the detector array to the radiator. The 382 W case assumed the same single-LHP failure with a 50% design margin added. All these transient target temperatures were successfully implemented in the engineering test unit (ETU) LHP and flight LHP thermal performance verification tests in vacuum.
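The equivalent sink temperature itself follows from a radiative balance: it is the temperature of blackbody surroundings that would exchange the same net flux with the surface as the real environment. A minimal sketch; the flux value is illustrative, not from the Swift analysis:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equivalent_sink_temp(q_absorbed, emissivity):
    """Equivalent sink temperature in kelvin: the blackbody-surroundings
    temperature at which emission balances the absorbed environmental
    flux q_absorbed (W/m^2) for a surface of the given IR emissivity."""
    return (q_absorbed / (emissivity * SIGMA)) ** 0.25

# A surface absorbing sigma*T^4 with emissivity 1 sees a sink at T
print(equivalent_sink_temp(SIGMA * 300.0 ** 4, 1.0))
```

Evaluating this over an orbit with the transient absorbed fluxes gives the time-varying sink temperatures that the test targets then have to reproduce.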
NASA Technical Reports Server (NTRS)
Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert
2004-01-01
The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at levels of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements.
Double CCs with ideally co-incident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally combination of models, and various other errors are discussed.
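The eigenvalue/eigenvector analysis of an error-mapping matrix described above can be sketched as follows; the matrix entries and dimensions here are random placeholders for illustration, not SIM values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical error-mapping matrix M: maps component-level error
# parameters (e.g., corner-cube dihedral offsets) to delay errors
# at a grid of field-of-regard (FOR) pointings.
n_pointings, n_params = 50, 6
M = rng.normal(size=(n_pointings, n_params))

# Sensitivity analysis via the eigen-decomposition of M^T M:
# eigenvectors give the parameter combinations to which the delay
# error is most (and least) sensitive.  eigh returns eigenvalues
# in ascending order.
w, V = np.linalg.eigh(M.T @ M)
worst_combo = V[:, -1]                        # most-amplified parameter direction
rms_amplification = np.sqrt(w[-1] / n_pointings)

# RMS delay error across the FOR for a unit error along that direction
delay = M @ worst_combo
rms = np.sqrt(np.mean(delay ** 2))
```

For a unit eigenvector v, ||Mv||^2 = v^T M^T M v equals the eigenvalue, so `rms` and `rms_amplification` agree; this is the sense in which the eigenvectors rank error-parameter combinations by their delay-error sensitivity.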
1991-01-01
Experience in developing integrated optical devices, nonlinear magneto-optic materials, high-frequency modulators, computer-aided modeling and sophisticated... high-level presentation and distributed control models for integrating heterogeneous mechanical engineering applications and tools. The design is focused... statistically accurate worst-case device models for circuit simulation. Present methods of worst-case device design are ad hoc and do not allow the...
Enjolras, Vivien; Vincent, Patrick; Souyris, Jean-Claude; Rodriguez, Ernesto; Phalippou, Laurent; Cazenave, Anny
2006-01-01
The main limitations of standard nadir-looking radar altimeters have long been known. They include the lack of coverage (intertrack distance of typically 150 km for the T/P / Jason tandem) and the spatial resolution (typically 2 km for T/P and Jason), expected to be a limiting factor for the determination of mesoscale phenomena in the deep ocean. In this context, various solutions using off-nadir radar interferometry have been proposed by Rodriguez et al. to address oceanographic mission objectives. This paper presents a performance study of this new generation of instruments and their dedicated missions. A first approach is based on the Wide-Swath Ocean Altimeter (WSOA), intended to be implemented onboard Jason-2 in 2004 but now abandoned. Every error domain has been checked: the physics of the measurement, its geometry, the impact of the platform, and external errors such as the tropospheric and ionospheric delays. We show in particular the strong need to move to a sun-synchronous orbit and the non-negligible impact of propagation-media errors in the swath, reaching a few centimetres in the worst case. Changes in the parameters of the instrument are also discussed to improve the overall error budget. The outcomes have led to the definition and optimization of such an instrument and its dedicated mission.
[Evaluation of four dark object atmospheric correction methods based on ZY-3 CCD data].
Guo, Hong; Gu, Xing-fa; Xie, Yong; Yu, Tao; Gao, Hai-liang; Wei, Xiang-qin; Liu, Qi-yue
2014-08-01
The present paper evaluates four dark-object subtraction (DOS) atmospheric correction methods based on 2012 Inner Mongolia experimental data. The authors analyzed the impacts of the key parameters of the four DOS methods when applied to ZY-3 CCD data. The results showed that (1) all four DOS methods have a significant atmospheric correction effect at bands 1, 2 and 3, but at band 4 the atmospheric correction effect of DOS4 is the best while DOS2 is the worst, and neither DOS1 nor DOS3 has an obvious atmospheric correction effect; (2) the relative error (RE) of the DOS1 atmospheric correction method is larger than 10% at all four bands; DOS2 works best at band 1 (AE (absolute error) = 0.0019 and RE = 4.32%) and worst at band 4 (AE = 0.0464 and RE = 19.12%); the RE of DOS3 is about 10% for all bands; (3) the AE of the atmospheric correction results for the DOS4 method is less than 0.02 and the RE is less than 10% for all bands. Therefore, the DOS4 method provides the most accurate atmospheric correction results for ZY-3 imagery.
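The simplest of these corrections (a DOS1-style subtraction) can be sketched as below. The reflectance formula follows the common Chavez-style formulation; the irradiance value, dark-pixel fraction, and clipping are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def dos1_reflectance(radiance, esun, sun_elev_deg, d=1.0, dark_frac=0.01):
    """DOS1-style dark-object subtraction for one band (a sketch).

    radiance     : 2-D array of at-sensor radiance
    esun         : mean solar exoatmospheric irradiance for the band
    sun_elev_deg : solar elevation angle in degrees
    d            : Earth-Sun distance in astronomical units
    dark_frac    : fraction of pixels treated as the dark object
    """
    # Haze estimate: radiance of the darkest pixels, assumed to come
    # almost entirely from atmospheric scattering.
    l_haze = np.quantile(radiance, dark_frac)
    theta_z = np.deg2rad(90.0 - sun_elev_deg)            # solar zenith angle
    rho = np.pi * (radiance - l_haze) * d ** 2 / (esun * np.cos(theta_z))
    return np.clip(rho, 0.0, 1.0)                        # surface reflectance
```

The more elaborate DOS2-DOS4 variants refine this by also modeling atmospheric transmittance and downwelling irradiance rather than assuming them to be 1 and 0.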
Validation of GPU based TomoTherapy dose calculation engine.
Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond
2012-04-01
The graphics processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. Before the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurements with an ion chamber and with film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in a heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine. The majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The worst case observed in the phantom had 0.22% of voxels violating the criterion. In patient cases, the worst percentage of voxels violating the criterion was 0.57%. For absolute point dose verification, all cases agreed with measurement to within ±3%, with an average error magnitude within 1%. All cases passed the acceptance criterion that more than 95% of the pixels have Γ(3%, 3 mm) < 1 in film measurement, and the average passing pixel percentage was 98.5%-99%. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster-based dose engine without degradation in dose accuracy.
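The gamma-index comparison used above can be illustrated with a minimal 1-D sketch (global normalization, brute-force search over evaluated points); real dose engines work on 2-D/3-D grids with optimized searches:

```python
import numpy as np

def gamma_index_1d(ref, ev, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D gamma index with global normalization (a sketch).

    ref, ev  : reference and evaluated dose profiles on the grid x (mm)
    dose_tol : dose-difference criterion, as a fraction of max(ref)
    dist_tol : distance-to-agreement criterion (mm)
    """
    dmax = ref.max()
    gam = np.empty(len(ref))
    for i in range(len(ref)):
        dd = (ev - ref[i]) / (dose_tol * dmax)   # normalized dose differences
        dx = (x - x[i]) / dist_tol               # normalized distances
        gam[i] = np.sqrt(dx ** 2 + dd ** 2).min()  # minimum over evaluated points
    return gam
```

A point passes when its gamma value is below 1, so the passing rate quoted in the abstract corresponds to `np.mean(gam < 1)` for, e.g., Γ(3%, 3 mm).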
Local measles vaccination gaps in Germany and the role of vaccination providers.
Eichner, Linda; Wjst, Stephanie; Brockmann, Stefan O; Wolfers, Kerstin; Eichner, Martin
2017-08-14
Measles elimination in Europe is an urgent public health goal, yet despite the efforts of its member states, vaccination gaps and outbreaks occur. This study explores local vaccination heterogeneity in kindergartens and municipalities of a German county. Data on children from mandatory school enrolment examinations in 2014/15 in Reutlingen county were used. Children with unknown vaccination status were either removed from the analysis (best case) or assumed to be unvaccinated (worst case). Vaccination data were translated into expected outbreak probabilities. Physicians and kindergartens with statistically outstanding numbers of under-vaccinated children were identified. A total of 170 (7.1%) of 2388 children did not provide a vaccination certificate; 88.3% (worst case) or 95.1% (best case) were vaccinated at least once against measles. Based on the worst case vaccination coverage, <10% of municipalities and <20% of kindergartens were sufficiently vaccinated to be protected against outbreaks. Excluding children without a vaccination certificate (best case) leads to over-optimistic views: the overall outbreak probability in case of a measles introduction lies between 39.5% (best case) and 73.0% (worst case). Four paediatricians were identified who accounted for 41 of 109 unvaccinated children and for 47 of 138 incomplete vaccinations; GPs showed significantly higher rates of missing vaccination certificates and unvaccinated or under-vaccinated children than paediatricians. Missing vaccination certificates pose a severe problem regarding the interpretability of vaccination data. Although the coverage for at least one measles vaccination is higher in the studied county than in most South German counties and higher than the European average, many severe and potentially dangerous vaccination gaps occur locally. If other federal German states and EU countries show similar vaccination variability, measles elimination may not succeed in Europe.
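As an illustration of how vaccination coverage can be translated into an outbreak probability, here is a minimal branching-process sketch with a geometric offspring distribution and homogeneous mixing; both the R0 value and the model itself are assumptions for illustration, not the study's actual method:

```python
def outbreak_probability(coverage, r0=15.0):
    """Probability that a single measles introduction sparks an outbreak.

    Simple branching-process result for a geometric offspring
    distribution: extinction is certain when the effective
    reproduction number R_eff <= 1; otherwise the outbreak
    probability is 1 - 1/R_eff.  Assumes homogeneous mixing and an
    all-or-nothing vaccine (illustrative only).
    """
    r_eff = r0 * (1.0 - coverage)
    if r_eff <= 1.0:
        return 0.0
    return 1.0 - 1.0 / r_eff
```

With the coverages reported above, the worst-case figure of 88.3% yields a substantial outbreak probability while the best-case 95.1% sits above the herd-immunity threshold, which is why the best-case view is described as over-optimistic only when unverified children are in fact unvaccinated.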
Command and Control Software Development Memory Management
NASA Technical Reports Server (NTRS)
Joseph, Austin Pope
2017-01-01
This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors, and some of them are more benign than others. At the extreme end, a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and, in the worst case, can use all the available memory and crash the program. If the leaks are small, they may simply slow the program down, which, in a safety-critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.
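In a garbage-collected language, the analogue of a direct leak is an unintentionally retained reference. A minimal sketch of locating such growth with Python's standard tracemalloc module, which plays a role similar to the COTS leak-tracking tool described above:

```python
import tracemalloc

_cache = []                         # simulated leak: references retained forever

def leaky_handler(msg):
    # Each call allocates a large string that is never released,
    # because _cache keeps a reference to it.
    _cache.append(msg * 1000)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(1000):
    leaky_handler("status")

after = tracemalloc.take_snapshot()

# Comparing snapshots ranks source lines by allocation growth;
# the leaking append dominates the list.
stats = after.compare_to(before, "lineno")
top = stats[0]
```

In long-running services the same technique is applied between widely spaced snapshots: a site whose `size_diff` grows without bound across successive comparisons is a leak candidate.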
Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors
NASA Astrophysics Data System (ADS)
Marti, Alejandro; Folch, Arnau
2018-03-01
Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. 
The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally efficient online dispersal models.
Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios
NASA Astrophysics Data System (ADS)
Kluess, D.; Mittelmeier, W.; Bader, R.
2009-12-01
In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits as proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensely. Apart from standard testing, we calculated the implant performance under different worst case scenarios including malposition, bone defects and stumbling. A finite-element-model was developed to calculate the implant performance in situ. The worst case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stress amounts were above the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.
Dima, Giovanna; Verzera, Antonella; Grob, Koni
2011-11-01
Party plates made of recycled paperboard with a polyolefin film on the food contact surface (more often polypropylene than polyethylene) were tested for migration of mineral oil into various foods applying reasonable worst-case conditions. The worst case was identified as a slice of fried meat placed onto the plate while hot and allowed to cool for 1 h. As it caused the acceptable daily intake (ADI) specified by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) to be exceeded, it is concluded that recycled paperboard is generally acceptable for party plates only when separated from the food by a functional barrier. Migration data obtained with oil as a simulant at 70°C were compared to the migration into foods. A contact time of 30 min was found to reasonably cover the worst case determined in food.
A Neural Network/Acoustic Emission Analysis of Impact Damaged Graphite/Epoxy Pressure Vessels
NASA Technical Reports Server (NTRS)
Walker, James L.; Hill, Erik v. K.; Workman, Gary L.; Russell, Samuel S.
1995-01-01
Acoustic emission (AE) signal analysis has been used to measure the effects of impact damage on burst pressure in 5.75 inch diameter, inert propellant filled, filament wound pressure vessels. The AE data were collected from fifteen graphite/epoxy pressure vessels featuring five damage states and three resin systems. A burst pressure prediction model was developed by correlating the AE amplitude (frequency) distribution, generated during the first pressure ramp to 800 psig (approximately 25% of the average expected burst pressure for an undamaged vessel), to known burst pressures using a four-layer backpropagation neural network. The neural network, trained on three vessels from each resin system, was able to predict burst pressures with a worst-case error of 5.7% for the entire fifteen-bottle set.
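A much smaller stand-in for such a burst-pressure regression can be sketched with plain backpropagation on fabricated data; the feature count, one-hidden-layer architecture, and synthetic target below are illustrative assumptions, not the study's four-layer network or real AE data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-in data: AE amplitude-distribution features -> burst pressure.
X = rng.uniform(size=(15, 8))                  # 15 vessels, 8 amplitude bins
y = (X @ rng.uniform(size=8))[:, None]
y -= y.mean()                                  # center the target

# One-hidden-layer network trained by plain backpropagation on MSE loss
W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                   # forward pass, hidden layer
    pred = h @ W2 + b2                         # forward pass, output layer
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g2 = 2.0 * err / len(X)                    # d(loss)/d(pred)
    g1 = (g2 @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)  # gradient-descent updates
    W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(0)
```

The worst-case prediction error quoted in the abstract corresponds to the largest relative error over the held-out vessels, i.e. `max(|pred - y| / y)` evaluated on bottles not used in training.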
Use of the 4D-Global Reference Atmosphere Model (GRAM) for space shuttle descent design
NASA Technical Reports Server (NTRS)
Mccarty, S. M.
1987-01-01
The method of using the Global Reference Atmosphere Model (GRAM) mean and dispersed atmospheres to study skipout/overshoot requirements, characterize mean and worst-case vehicle temperatures, study control requirements, and verify the design is discussed. Landing sites in these analyses range from 65 N to 30 S, while orbit inclinations vary from 20 deg to 98 deg. The primary concern is that the vertical steps in the reentry calculation cannot be made as small as desired, because the model predicts anomalously large density shear rates for very small vertical step sizes. The winds predicted by the model are also not satisfactory, probably because they are geostrophic winds and because the model has an error in the computation of winds in the equatorial regions.
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
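The contrast between a deterministic worst-case evaluation and the probabilistic matrix-based approach can be sketched with a toy check-coverage matrix; all probability values below are hypothetical:

```python
import numpy as np

# Rows: on-line checks; columns: fault sites.  Each entry is the
# probability that the check detects a fault at that site
# (illustrative values, assumed independent across checks).
C = np.array([
    [0.90, 0.00, 0.60],
    [0.00, 0.95, 0.40],
    [0.70, 0.20, 0.00],
])

# Probabilistic estimate per fault: a fault escapes detection only
# if every check misses it.
p_detect = 1.0 - np.prod(1.0 - C, axis=0)

# Deterministic worst-case view: a fault counts as detected only if
# some check detects it with certainty -- here, none do, so the
# deterministic evaluation reports zero coverage.
worst_case_detected = (C == 1.0).any(axis=0)
```

This is exactly the gap the abstract points at: the worst-case evaluation declares every fault undetectable, while the probabilistic matrices show detection probabilities of 76-97% for the same system.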
Davis, Michael J; Janke, Robert
2018-01-04
The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
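Sensor-placement optimization of the kind described can be sketched with a greedy search over a synthetic impact matrix; the data and the simple min-over-sensors detection model are assumptions for illustration, not the study's hydraulic simulations:

```python
import numpy as np

rng = np.random.default_rng(2)

# impact[s, e]: people exposed if contamination event e occurs and the
# first detecting sensor is at candidate site s (synthetic values).
n_sites, n_events = 20, 50
impact = rng.uniform(100.0, 10_000.0, size=(n_sites, n_events))

def greedy_placement(impact, k, objective):
    """Greedily choose k sensor sites minimizing `objective` over events.

    With a chosen sensor set, each event's consequence is the minimum
    impact over the chosen sites (earliest detection dominates).
    """
    chosen = []
    for _ in range(k):
        best_site, best_val = None, np.inf
        for s in range(impact.shape[0]):
            if s in chosen:
                continue
            per_event = impact[chosen + [s]].min(axis=0)
            val = objective(per_event)
            if val < best_val:
                best_site, best_val = s, val
        chosen.append(best_site)
    return chosen, best_val

worst_sites, worst_val = greedy_placement(impact, 3, np.max)   # worst-case design
mean_sites, mean_val = greedy_placement(impact, 3, np.mean)    # mean-case design
```

Evaluating a design "transplanted" into a different network model, as in the study, amounts to keeping the chosen sites but recomputing the impact matrix from the more complete model before scoring.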
Faerber, Julia; Cummins, Gerard; Pavuluri, Sumanth Kumar; Record, Paul; Rodriguez, Adrian R Ayastuy; Lay, Holly S; McPhillips, Rachael; Cox, Benjamin F; Connor, Ciaran; Gregson, Rachael; Clutton, Richard Eddie; Khan, Sadeque Reza; Cochran, Sandy; Desmulliez, Marc P Y
2018-02-01
This paper describes the design, fabrication, packaging, and performance characterization of a conformal helix antenna created on the outside of a capsule endoscope designed to operate at a carrier frequency of 433 MHz within human tissue. Wireless data transfer was established between the integrated capsule system and an external receiver. The telemetry system was tested within a tissue phantom and in vivo porcine models. Two different types of transmission modes were tested. The first mode, replicating normal operating conditions, used data packets at a steady power level of 0 dBm while the capsule was being withdrawn at a steady rate from the small intestine. The second mode, replicating the worst-case clinical scenario of capsule retention within the small bowel, sent data with stepwise increasing power levels of -10, 0, 6, and 10 dBm, with the capsule fixed in position. The temperature of the tissue surrounding the external antenna was monitored at all times using thermistors embedded within the capsule shell to observe potential safety issues. The recorded data showed, for both modes of operation, low-error transmission, with a packet error rate of 10^-3 and a bit error rate of 10^-5, and no temperature increase of the tissue according to IEEE standards.
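Under an independent-bit-error assumption, packet and bit error rates relate through a standard formula; the 100-bit packet length below is an assumption for illustration, not the paper's packet format:

```python
def packet_error_rate(ber, bits_per_packet):
    """PER for independent bit errors: a packet fails if any bit flips.

    PER = 1 - (1 - BER)^n, which is approximately n * BER when
    n * BER << 1.
    """
    return 1.0 - (1.0 - ber) ** bits_per_packet
```

For example, a bit error rate of 10^-5 with packets on the order of 100 bits gives a packet error rate near 10^-3, consistent with the orders of magnitude reported above; real links with burst errors or forward error correction deviate from this simple relation.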
Sensitivity of worst-case storm surge considering the influence of climate change
NASA Astrophysics Data System (ADS)
Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya
2016-04-01
There are two standpoints when assessing risk caused by climate change. One is disaster prevention, for which we need probabilistic information on meteorological elements obtained from a sufficiently large number of ensemble simulations. The other is disaster mitigation, for which we must use a sophisticated, very-high-resolution model to represent a worst-case event in detail. If sufficient computing resources were available to drive many ensemble runs with a very-high-resolution model, both themes could be handled at once; in practice, however, resources are limited, and we must trade off resolution against the number of simulations when designing the experiment. Applying the PGWD (Pseudo Global Warming Downscaling) method is one way to analyze a worst-case event in detail. Here we introduce an example of estimating the influence of climate change on a worst-case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1-km-grid WRF model can represent both the intensity and the structure of a super typhoon. With the PGWD method we can estimate only the influence of climate change on the development process of the typhoon; changes in typhoon genesis cannot be estimated. Finally, we ran the SU-WAT model (which includes a shallow-water-equation model) to obtain the storm surge height. The result indicates that the height of the storm surge increased by up to 20% owing to 150 years of climate change.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.
Electrical Evaluation of RCA MWS5501D Random Access Memory, Volume 2, Appendix a
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.
Probabilistic Models for Solar Particle Events
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.
2009-01-01
Probabilistic Models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z > 2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst-case spectrum as a function of confidence level. The spectral representation that best fits these worst-case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.
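Selecting a worst-case environment at a user-specified confidence level can be sketched as a per-energy quantile over an ensemble of event spectra; the power-law ensemble below is synthetic, and the actual models fit eight published spectral forms to measured spectra rather than taking quantiles of raw data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic event-integrated fluence spectra for many SPEs:
# rows = events, columns = energy bins (power laws with random
# magnitudes and spectral indices).
energies = np.logspace(1, 3, 20)                        # 10..1000 MeV/nuc
n_events = 200
A = rng.lognormal(mean=10.0, sigma=1.5, size=n_events)  # magnitudes
gamma = rng.uniform(1.5, 3.5, size=n_events)            # spectral indices
spectra = A[:, None] * energies[None, :] ** -gamma[:, None]

def worst_case_spectrum(spectra, confidence):
    """Per-energy fluence not exceeded with the given confidence."""
    return np.quantile(spectra, confidence, axis=0)

s90 = worst_case_spectrum(spectra, 0.90)
s99 = worst_case_spectrum(spectra, 0.99)
```

Raising the confidence level can only raise the design environment at every energy, which is the monotone behavior a mission designer trades against shielding mass.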
A Multidimensional Assessment of Children in Conflictual Contexts: The Case of Kenya
ERIC Educational Resources Information Center
Okech, Jane E. Atieno
2012-01-01
Children in Kenya's Kisumu District Primary Schools (N = 430) completed three measures of trauma. Respondents completed the "My Worst Experience Scale" (MWES; Hyman and Snook 2002) and its supplement, the "School Alienation and Trauma Survey" (SATS; Hyman and Snook 2002), sharing their worst experiences overall and specifically…
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime as O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales as O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series-expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are also derived in this work. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction.
Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
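The kind of worst-case truncation bound discussed above can be illustrated for the 1/r potential with a monopole-plus-dipole expansion about a cluster center; the geometry and charges below are arbitrary, and the classical bound Σ|q| (a/R)^2 / (R - a) for truncation after the dipole term is used:

```python
import numpy as np

rng = np.random.default_rng(4)

# Source cluster: positions within radius a of center c, positive "charges"
c = np.zeros(3)
a = 0.5
y = rng.uniform(-a / np.sqrt(3), a / np.sqrt(3), size=(100, 3))  # |y| <= a
q = rng.uniform(0.5, 1.0, size=100)

x = np.array([10.0, 0.0, 0.0])          # distant evaluation point

# Direct sum of 1/r contributions (the O(N^2) kernel, for one target)
direct = np.sum(q / np.linalg.norm(x - y, axis=1))

# Two-term multipole (monopole + dipole) expansion about c
R = np.linalg.norm(x - c)
monopole = q.sum() / R
dipole = (q[:, None] * (y - c)).sum(axis=0) @ (x - c) / R ** 3
approx = monopole + dipole

# Worst-case bound for truncating the Legendre series after the dipole:
# sum_{k>=2} (a/R)^k / R  <=  (a/R)^2 / (R - a), scaled by total charge.
bound = q.sum() * (a / R) ** 2 / (R - a)
```

Because the bound depends only on the cluster radius a and the separation R, an FMA-style code can pick the cell-to-cell interaction region (which well-separated cells may interact through expansions) directly from a target worst-case accuracy, which is the coupling the dissertation formalizes.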
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
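A low-fidelity sweep for the worst-case minimum approach distance can be sketched as a Monte Carlo over randomized scenarios; the random-walk total-system-error model and all numbers below are illustrative assumptions, not the study's aircraft dynamics:

```python
import numpy as np

rng = np.random.default_rng(5)

def min_separation(n_steps=200, spacing_ft=750.0, tse_ft=90.0):
    """One scenario: two aircraft on parallel approaches whose lateral
    total-system-error evolves as a random walk; returns the minimum
    lateral separation (ft) over the approach.  Illustrative model only."""
    tse_a = np.cumsum(rng.normal(0.0, tse_ft / np.sqrt(n_steps), n_steps))
    tse_b = np.cumsum(rng.normal(0.0, tse_ft / np.sqrt(n_steps), n_steps))
    sep = spacing_ft + tse_b - tse_a
    return float(np.abs(sep).min())

# Low-fidelity-style sweep: worst case over many random scenarios
worst = min(min_separation() for _ in range(2000))
```

The study's low-fidelity simulation plays this role over millions of scenarios to bound the worst-case approach distance, while the high-fidelity simulation adds realistic trajectories, communication latencies, and alerting timing for a smaller set of 108 cases.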
Visual short-term memory deficits associated with GBA mutation and Parkinson's disease.
Zokaei, Nahid; McNeill, Alisdair; Proukakis, Christos; Beavan, Michelle; Jarman, Paul; Korlipara, Prasad; Hughes, Derralynn; Mehta, Atul; Hu, Michele T M; Schapira, Anthony H V; Husain, Masud
2014-08-01
Individuals with mutation in the lysosomal enzyme glucocerebrosidase (GBA) gene are at significantly increased risk of developing Parkinson's disease with cognitive deficit. We examined whether visual short-term memory impairments, long associated with patients with Parkinson's disease, are also present in GBA-positive individuals, both with and without Parkinson's disease. Precision of visual working memory was measured using a serial order task in which participants observed four bars, each of a different colour and orientation, presented sequentially at screen centre. Afterwards, they were asked to adjust a coloured probe bar's orientation to match the orientation of the bar of the same colour in the sequence. An additional attentional 'filtering' condition tested patients' ability to selectively encode one of the four bars while ignoring the others. A sensorimotor task using the same stimuli controlled for perceptual and motor factors. There was a significant deficit in memory precision in GBA-positive individuals, with or without Parkinson's disease, as well as in GBA-negative patients with Parkinson's disease, compared to healthy controls. Worst recall was observed in GBA-positive cases with Parkinson's disease. Although all groups were impaired in visual short-term memory, there was a double dissociation between sources of error associated with GBA mutation and Parkinson's disease. The deficit observed in GBA-positive individuals, regardless of whether they had Parkinson's disease, was explained by a systematic increase in interference from features of other items in memory: misbinding errors. In contrast, impairments in patients with Parkinson's disease, regardless of GBA status, were explained by increased random responses. Individuals who were GBA-positive and also had Parkinson's disease suffered from both types of error, demonstrating the worst performance.
These findings provide evidence for dissociable signature deficits within the domain of visual short-term memory associated with GBA mutation and with Parkinson's disease. Identification of the specific pattern of cognitive impairment in GBA mutation versus Parkinson's disease is potentially important as it might help to identify individuals at risk of developing Parkinson's disease. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain.
Mühlbacher, Axel C; Kaczynski, Anika; Zweifel, Peter; Johnson, F Reed
2016-12-01
Best-worst scaling (BWS), also known as maximum-difference scaling, is a multiattribute approach to measuring preferences. BWS aims at the analysis of preferences regarding a set of attributes, their levels or alternatives. It is a stated-preference method based on the assumption that respondents are capable of making judgments regarding the best and the worst (or the most and least important, respectively) out of three or more elements of a choice set. As is true of discrete choice experiments (DCE) generally, BWS avoids the known weaknesses of rating and ranking scales while holding the promise of generating additional information by making respondents choose twice, namely the best as well as the worst criteria. A systematic literature review found 53 BWS applications in health and healthcare. This article expounds the possibilities of application, the underlying theoretical concepts and the implementation of BWS in its three variants: 'object case', 'profile case' and 'multiprofile case'. This paper contains a survey of BWS methods and revolves around study design, experimental design, and data analysis. Moreover, the article discusses the strengths and weaknesses of the three types of BWS distinguished and offers an outlook. A companion paper focuses on special issues of theory and statistical inference confronting BWS in preference measurement.
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
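The inflation produced by the naive procedure can be reproduced with a small Monte Carlo sketch. The adaptation rule below is hypothetical, chosen only to make the effect visible (it is not the paper's worst-case rule): the second-stage sample size stays small after a promising interim z-score and is quadrupled otherwise, while the final analysis pools both stages as if the sample size had been fixed in advance:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, crit = 0.025, 1.959964      # one-sided z-test at the 2.5% level
n1 = 50
nsim = 1_000_000

z1 = rng.standard_normal(nsim)     # stage-1 z-score under H0
# hypothetical data-driven rule: small second stage after a promising
# interim look, a 4x larger one otherwise
n2 = np.where(z1 > 0, n1, 4 * n1)
z2 = rng.standard_normal(nsim)     # stage-2 z-score under H0
# naive final statistic: pool both stages as if n2 had been fixed
z_naive = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
rate = np.mean(z_naive > crit)
```

Even this mild rule pushes the simulated rejection rate under the null noticeably above the nominal one-sided 0.025; rules tuned to maximize the conditional type 1 error, as in the paper's worst-case search, inflate it further.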
Effect of Impact Location on the Response of Shuttle Wing Leading Edge Panel 9
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Spellman, Regina L.; Hardy, Robin C.; Fasanella, Edwin L.; Jackson, Karen E.
2005-01-01
The objective of this paper is to compare the results of several simulations performed to determine the worst-case location for a foam impact on the Space Shuttle wing leading edge. The simulations were performed using the commercial non-linear transient dynamic finite element code, LS-DYNA. These simulations represent the first in a series of parametric studies performed to support the selection of the worst-case impact scenario. Panel 9 was selected for this study to enable comparisons with previous simulations performed during the Columbia Accident Investigation. The projectile for this study is a 5.5-in cube of typical external tank foam weighing 0.23 lb. Seven locations spanning the panel surface were impacted with the foam cube. For each of these cases, the foam was traveling at 1000 ft/s directly aft, along the orbiter X-axis. Results compared from the parametric studies included strains, contact forces, and material energies for various simulations. The results show that the worst case impact location was on the top surface, near the apex.
Faith, Daniel P.
2015-01-01
The phylogenetic diversity measure, (‘PD’), measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672
NASA Technical Reports Server (NTRS)
Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.
1999-01-01
The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.
Motl, Robert W; Fernhall, Bo
2012-03-01
To examine the accuracy of predicting peak oxygen consumption (VO2peak) primarily from peak work rate (WRpeak) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO2peak and WRpeak. There were strong, statistically significant associations between measured and predicted VO2peak in the overall sample (R^2=.89, standard error of the estimate=127.4 mL/min) and subsamples with (R^2=.89, standard error of the estimate=131.3 mL/min) and without (R^2=.85, standard error of the estimate=126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO2peak within 10% of its true value in 95 of every 100 subjects with MS. VO2peak can be accurately predicted in persons with RRMS who have minimal disability, as it is in controls, by using established equations and WRpeak recorded from a maximal, incremental exercise test on a cycle ergometer. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
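The accuracy statistics reported above can be computed directly from measured and predicted values. A sketch with hypothetical VO2peak data (the study's regression equation itself is not reproduced here):

```python
import numpy as np

def see_and_within10(measured, predicted, dof_lost=2):
    """Standard error of the estimate (SEE) for a fitted regression, plus
    the share of subjects predicted within 10% of their measured value."""
    m = np.asarray(measured, float)
    p = np.asarray(predicted, float)
    resid = m - p
    see = np.sqrt(np.sum(resid**2) / (len(m) - dof_lost))
    within = np.mean(np.abs(resid) <= 0.10 * m)
    return see, within

# hypothetical VO2peak values in mL/min (not the study's data)
measured  = [1800.0, 2100.0, 2400.0, 2000.0, 2600.0, 2200.0]
predicted = [1850.0, 2050.0, 2350.0, 2100.0, 2500.0, 2250.0]
see, frac = see_and_within10(measured, predicted)
```

The "95 of every 100 subjects within 10%" claim corresponds to `frac` evaluated over the study sample; SEE is the residual standard deviation after losing degrees of freedom to the fitted coefficients.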
Dinges, Eric; Felderman, Nicole; McGuire, Sarah; Gross, Brandie; Bhatia, Sudershan; Mott, Sarah; Buatti, John; Wang, Dongxu
2015-01-01
Background and Purpose This study evaluates the potential efficacy and robustness of functional bone marrow sparing (BMS) using intensity-modulated proton therapy (IMPT) for cervical cancer, with the goal of reducing hematologic toxicity. Material and Methods IMPT plans with prescription dose of 45 Gy were generated for ten patients who had received BMS intensity-modulated x-ray therapy (IMRT). Functional bone marrow was identified by 18F-fluorothymidine positron emission tomography. IMPT plans were designed to minimize the volume of functional bone marrow receiving 5–40 Gy while maintaining similar target coverage and healthy organ sparing as IMRT. IMPT robustness was analyzed with ±3% range uncertainty errors and/or ±3 mm translational setup errors in all three principal dimensions. Results In the static scenario, the median dose volume reductions for functional bone marrow by IMPT were: 32% for V5Gy, 47% for V10Gy, 54% for V20Gy, and 57% for V40Gy, all with p<0.01 compared to IMRT. With assumed errors, even the worst-case reductions by IMPT were: 23% for V5Gy, 37% for V10Gy, 41% for V20Gy, and 39% for V40Gy, all with p<0.01. Conclusions The potential sparing of functional bone marrow by IMPT for cervical cancer is significant and robust under realistic systematic range uncertainties and clinically relevant setup errors. PMID:25981130
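The robustness analysis reduces to evaluating the dose-volume metric under each perturbation scenario and reporting the least favorable one. A sketch with hypothetical V10Gy values (illustrative numbers, not the study's data):

```python
def worst_case_reduction(impt_scenarios, imrt_value):
    """Robustness summary: relative sparing of a dose-volume metric under
    the least favorable perturbation scenario. A higher metric means less
    sparing, so the worst case is the maximum over scenarios."""
    worst_impt = max(impt_scenarios)
    return (imrt_value - worst_impt) / imrt_value * 100.0

# hypothetical V10Gy (%) of functional marrow across 9 perturbed IMPT
# plans (range and setup errors), plus the corresponding IMRT value
impt_v10 = [41.0, 43.5, 44.8, 42.2, 45.1, 43.0, 44.0, 42.8, 43.9]
imrt_v10 = 71.6
reduction = worst_case_reduction(impt_v10, imrt_v10)
```

Reporting the reduction under the worst scenario, as done for the 23-41% figures above, guarantees the claimed sparing holds for every simulated error combination.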
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response from pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral response to volumetric changes in single and combinations of algal mixtures with known ratios were tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on abundances that produced the lowest root mean square error. Percent prediction error was computed as the difference between actual percent volumetric content and abundances at minimum RMS error. Best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found as 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
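For two endmembers, constrained linear unmixing reduces to a one-parameter search over abundance. A minimal sketch with hypothetical spectra, where a brute-force grid over the abundance stands in for the constrained solver used in the study:

```python
import numpy as np

def unmix_two(mix, e1, e2, step=0.001):
    """Constrained linear unmixing for a two-endmember spectrum: find
    abundance a in [0, 1] minimizing the RMS error of a*e1 + (1-a)*e2.
    Abundances are nonnegative and sum to one by construction."""
    best_a, best_rms = 0.0, np.inf
    for a in np.arange(0.0, 1.0 + step, step):
        rms = np.sqrt(np.mean((mix - (a * e1 + (1 - a) * e2)) ** 2))
        if rms < best_rms:
            best_a, best_rms = a, rms
    return best_a, best_rms

# hypothetical endmember spectra and a synthetic 60/40 mixture
e1 = np.array([0.2, 0.5, 0.9, 0.4])
e2 = np.array([0.7, 0.3, 0.1, 0.6])
mix = 0.6 * e1 + 0.4 * e2
a, rms = unmix_two(mix, e1, e2)
pred_error_pct = abs(a - 0.6) * 100   # percent prediction error
```

With noisy measured spectra the minimum-RMS abundance deviates from the true volumetric fraction, which is exactly the percent prediction error (0.4-13.4%) reported above.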
Robust attitude control design for spacecraft under assigned velocity and control constraints.
Hu, Qinglei; Li, Bo; Zhang, Youmin
2013-07-01
A novel robust nonlinear control design under the constraints of assigned velocity and actuator torque is investigated for attitude stabilization of a rigid spacecraft. More specifically, a nonlinear feedback control is firstly developed by explicitly taking into account the constraints on individual angular velocity components as well as external disturbances. Considering further the actuator misalignments and magnitude deviation, a modified robust least-squares based control allocator is employed to deal with the problem of distributing the previously designed three-axis moments over the available actuators, in which the focus of this control allocation is to find the optimal control vector of actuators by minimizing the worst-case residual error using programming algorithms. The attitude control performance using the controller structure is evaluated through a numerical example. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Analysis of the passive stabilization of the long duration exposure facility
NASA Technical Reports Server (NTRS)
Siegel, S. H.; Vishwanath, N. S.
1977-01-01
The nominal Long Duration Exposure Facility (LDEF) configurations and the anticipated orbit parameters are presented. A linear steady state analysis was performed using these parameters. The effects of orbit eccentricity, solar pressure, aerodynamic pressure, magnetic dipole, and the magnetically anchored rate damper were evaluated to determine the configuration sensitivity to variations in these parameters. The worst case conditions for steady state errors were identified, and the performance capability calculated. Garber instability bounds were evaluated for the range of configuration and damping coefficients under consideration. The transient damping capabilities of the damper were examined, and the time constant as a function of damping coefficient and spacecraft moment of inertia determined. The capture capabilities of the damper were calculated, and the results combined with steady state, transient, and Garber instability analyses to select damper design parameters.
Neural Network Burst Pressure Prediction in Composite Overwrapped Pressure Vessels
NASA Technical Reports Server (NTRS)
Hill, Eric v. K.; Dion, Seth-Andrew T.; Karl, Justin O.; Spivey, Nicholas S.; Walker, James L., II
2007-01-01
Acoustic emission data were collected during the hydroburst testing of eleven 15 inch diameter filament wound composite overwrapped pressure vessels. A neural network burst pressure prediction was generated from the resulting AE amplitude data. The bottles shared commonality of graphite fiber, epoxy resin, and cure time. Individual bottles varied by cure mode (rotisserie versus static oven curing), types of inflicted damage, temperature of the pressurant, and pressurization scheme. Three categorical variables were selected to represent undamaged bottles, impact damaged bottles, and bottles with lacerated hoop fibers. This categorization along with the removal of the AE data from the disbonding noise between the aluminum liner and the composite overwrap allowed the prediction of burst pressures in all three sets of bottles using a single backpropagation neural network. Here the worst case error was 3.38 percent.
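The worst-case figure quoted above is simply the largest prediction error expressed as a percentage of the measured burst pressure. A sketch with hypothetical pressures (not the test data):

```python
def worst_case_percent_error(actual, predicted):
    """Largest absolute prediction error as a percentage of the
    corresponding actual value."""
    return max(abs(a - p) / a * 100.0 for a, p in zip(actual, predicted))

# hypothetical burst pressures (psi) and network predictions
actual    = [3000.0, 3200.0, 2800.0, 3100.0]
predicted = [2950.0, 3210.0, 2890.0, 3075.0]
wce = worst_case_percent_error(actual, predicted)
```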
Stochastic Formal Correctness of Numerical Algorithms
NASA Technical Reports Server (NTRS)
Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick
2009-01-01
We provide a framework to bound the probability that the accumulated errors of a numerical algorithm never exceed a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
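The flavor of such bounds can be sketched with Chebyshev's inequality (a variance-based corollary of Markov's inequality; all parameters below are illustrative, not the report's). Modeling n independent rounding errors as uniform on [-u/2, u/2] gives a probabilistic tail bound far smaller than the deterministic worst case n*u/2:

```python
def chebyshev_bound(n, u, t):
    """Chebyshev bound (Markov applied to the squared sum): n independent
    rounding errors, each uniform on [-u/2, u/2], have Var(sum) = n*u^2/12,
    so P(|sum| >= t) <= n*u^2 / (12*t^2)."""
    return n * u**2 / (12.0 * t**2)

u = 2.0 ** -24        # unit roundoff of single-precision-like arithmetic
n = 1_000_000         # number of accumulated operations (illustrative)
t = 1e-3              # error threshold of interest
worst_case = n * u / 2.0           # deterministic worst-case accumulation
p_bound = chebyshev_bound(n, u, t)
```

Here the worst-case bound exceeds the threshold (so worst-case analysis declares no significant bits), yet the probability of actually exceeding it is bounded well below one in a thousand; the report's Levy-style inequalities tighten this further.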
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contains a recommended circuit approach, specifies the pseudo-random (PN) sequence to be used, and details the properties of the sequence. Calculations showing the probability of failing to meet the required transition density are included. A computer simulation of the data stream and PN cover sequence is provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Alfa, M J; Olson, N
2016-05-01
To determine which simulated-use test soils met the worst-case organic levels and viscosity of clinical secretions, and had the best adhesive characteristics. Levels of protein, carbohydrate and haemoglobin, and vibrational viscosity of clinical endoscope secretions were compared with test soils including ATS, ATS2015, Edinburgh, Edinburgh-M (modified), Miles, 10% serum and coagulated whole blood. ASTM D3359 was used for adhesion testing. Cleaning of a single-channel flexible intubation endoscope was tested after simulated use. The worst-case levels of protein, carbohydrate and haemoglobin, and viscosity of clinical material were 219,828 μg/mL, 9296 μg/mL, 9562 μg/mL and 6 cP, respectively. Whole blood, ATS2015 and Edinburgh-M were pipettable, with viscosities of 3.4 cP, 9.0 cP and 11.9 cP, respectively. ATS2015 and Edinburgh-M best matched the worst-case clinical parameters, but ATS had the best adhesion with 7% removal (36.7% for Edinburgh-M). Edinburgh-M and ATS2015 showed similar soiling and removal characteristics from the surface and lumen of a flexible intubation endoscope. Of the test soils evaluated, ATS2015 and Edinburgh-M were found to be good choices for the simulated use of endoscopes, as their composition and viscosity most closely matched worst-case clinical material. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Olson, Scott A.
1996-01-01
Contraction scour for all modelled flows ranged from 0.1 to 3.1 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour at the left abutment ranged from 10.4 to 12.5 ft with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 25.3 to 27.3 ft with the worst-case occurring at the incipient-overtopping discharge. The worst-case total scour also occurred at the incipient-overtopping discharge. The incipient-overtopping discharge was in between the 100- and 500-year discharges. Additional information on scour depths and depths to armoring is included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Hollis, Geoff
2018-04-01
Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
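The simplest scoring algorithm in this literature is count-based: an item's score is the number of times it was chosen best minus the number of times it was chosen worst, normalized by how often it appeared. A sketch with hypothetical four-item trials (the many-item algorithms the paper introduces go beyond such simple counts, but refine the same idea):

```python
from collections import defaultdict

def best_worst_scores(trials):
    """Count-based best-worst scoring: score(item) =
    (# chosen best - # chosen worst) / # times presented.
    trials: iterable of (presented_items, best_choice, worst_choice)."""
    seen = defaultdict(int)
    net = defaultdict(int)
    for items, best, worst in trials:
        for it in items:
            seen[it] += 1
        net[best] += 1
        net[worst] -= 1
    return {it: net[it] / seen[it] for it in seen}

# hypothetical trials over word stimuli
trials = [
    (("calm", "happy", "angry", "sad"), "happy", "sad"),
    (("happy", "angry", "sad", "bored"), "happy", "angry"),
    (("calm", "angry", "sad", "bored"), "calm", "sad"),
]
scores = best_worst_scores(trials)
```

Scores fall in [-1, 1]: always-best items score 1, always-worst items -1, and unjudged middle items 0, which is where the extra rank information from each trial comes in.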
Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Radi, Mohammed
2018-05-01
The demand for broadband wireless services (internet, radio broadcast, TV, etc.) is increasing day by day, so it is necessary to exploit the available channel bandwidth as fully as possible, especially when the communication channels themselves are problematic. In this paper, we propose an estimation technique that estimates channel availability at the current moment and the next one, so that the error in the channel bandwidth is known and the transfer of data through the channel can be controlled. The proposed estimation is based on a combination of the least mean squares (LMS) algorithm, the standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules that adjust the rate and size of data sent through the network channel and rearrange the priorities of the buffered data (workstation control parameters, texts, phone calls, images, and camera video) for the worst cases of channel error. The proposed system is designed to manage data communications through the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best performance in time and noise estimation (0.1109 for 5% noise estimation to 0.3211 for 90% noise estimation), and the packet loss rate is reduced by a ratio of 35% to 385%.
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
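The flavor of the closed-form bit-allocation result can be sketched as a heuristic consistent with the stated proportionality (this is an illustration, not the paper's exact solution): each RF chain receives bits proportional to log2 of its SNR raised to the 1/3 power, shifted by a common offset so the rounded allocation meets the total budget:

```python
import numpy as np

def allocate_bits(snr, total_bits):
    """Heuristic ADC bit allocation: bits_i ~ log2(snr_i**(1/3)) plus an
    equalizing offset chosen so the allocation sums to total_bits, with a
    floor of 1 bit per chain. A sketch, not the paper's exact BA solution."""
    x = np.log2(np.asarray(snr, float) ** (1.0 / 3.0))
    base = (total_bits - x.sum()) / len(x)      # equalizing offset
    return np.clip(np.round(x + base), 1, None).astype(int)

snr = [1.0, 4.0, 16.0, 64.0]   # hypothetical per-chain SNRs
bits = allocate_bits(snr, total_bits=16)
```

Chains with stronger SNR get finer quantization, but only logarithmically so, which is why adaptive allocation beats fixed-resolution ADCs at the same total power.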
Optimal Analyses for 3×n AB Games in the Worst Case
NASA Astrophysics Data System (ADS)
Huang, Li-Te; Lin, Shun-Shii
The past decades have witnessed a growing interest in research on deductive games such as Mastermind and AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. However, traditional tree-search approaches are not appropriate for solving this problem, since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. Therefore, intensive analyses of optimally playing 3×n AB games in the worst case are conducted, and a sophisticated method, called structural reduction, which aims at explaining the worst situation in this game, is developed in the study. Furthermore, a formula for calculating the optimal numbers of guesses required for arbitrary values of n is derived and proven.
Lee, Wen-Jhy; Shih, Shun-I; Li, Hsing-Wang; Lin, Long-Full; Yu, Kuei-Min; Lu, Kueiwan; Wang, Lin-Chi; Chang-Chien, Guo-Ping; Fang, Kenneth; Lin, Mark
2009-04-30
Since the "Toxic Egg Event" broke out in central Taiwan, the possible sources of the high content of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in eggs have been a serious concern. In this study, the PCDD/F contents in different media (feed, soil and ambient air) were measured. Evaluation of the impact from an electric arc furnace dust treatment plant (abbreviated as EAFDT plant), which is site-specific to the "Toxic Egg Event", on the duck total-PCDD/F daily intake was conducted with both the Industrial Source Complex Short Term model (ISCST) and dry and wet deposition models. After different scenario simulations, the worst case was at farm A with 200 g feed and 5 g soil for duck intake, and the highest PCDD/F contributions from the feed, original soil and stack flue gas were 44.92, 47.81, and 6.58%, respectively. Considering different uncertainty factors, such as the flow rate variation of stack flue gas and errors from modelling and measurement, the PCDD/F contribution fraction from the stack flue gas of the EAFDT plant may increase to up to twice that of the worst case (6.58%), becoming 13.2%, which is still much lower than the total contribution fraction (86.8%) of both feed and original soil. Fly ash purposely added to duck feed by the farmers was a potential major source of the duck daily intake. While the impact from the EAFDT plant has been proven very minor, the PCDD/F content in the feed and soil, which was contaminated by illegal fly ash landfills, requires more attention.
CMSAF products Cloud Fraction Coverage and Cloud Type used for solar global irradiance estimation
NASA Astrophysics Data System (ADS)
Badescu, Viorel; Dumitrescu, Alexandru
2016-08-01
Two products provided by the climate monitoring satellite application facility (CMSAF) are the instantaneous Cloud Fractional Coverage (iCFC) and the instantaneous Cloud Type (iCTY) products. Previous studies based on the iCFC product show that the simple solar radiation models belonging to the cloudiness index class nCFC = 0.1-1.0 have rRMSE values ranging between 68 and 71 %. The products iCFC and iCTY are used here to develop simple models providing hourly estimates of solar global irradiance. Measurements performed at five weather stations in Romania (South-Eastern Europe) are used. Two three-class characterizations of the state of the sky, based on the iCTY product, are defined. In the case of the first new sky-state classification, which is roughly related to cloud altitude, the solar radiation models proposed here perform worst for the iCTY class 4-15, with rRMSE values ranging between 46 and 57 %. The error spread of the simple models is lower than that of the MAGIC model for the iCTY classes 1-4 and 15-19, but larger for iCTY classes 4-15. In the case of the second new sky-state classification, which takes into account in a weighted manner the chance of the sun being covered by different types of clouds, the solar radiation models proposed here perform worst for the cloudiness index class nCTY = 0.7-0.1, with rRMSE values ranging between 51 and 66 %. Therefore, the two new sky-state classifications based on the iCTY product are useful for increasing the accuracy of solar radiation models.
Stenemo, Fredrik; Jarvis, Nicholas
2007-09-01
A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the concentration simulated with no errors, c(sim), and the 80th percentile concentration for the scenario. The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
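The Monte Carlo construction of the uncertainty factor can be sketched with a toy leaching model standing in for MACRO. The functional form, nominal K(b) and error magnitude below are assumptions for illustration only:

```python
import random
import statistics

# Toy stand-in for the MACRO model: propagate pedotransfer-function regression
# errors by Monte Carlo and form the uncertainty factor f(u) = c80 / c(sim).
random.seed(1)

def leached_conc(k_b):
    """Hypothetical leached concentration vs. saturated matrix conductivity."""
    return 0.1 * (1.0 + 50.0 / k_b)

k_b_nominal = 10.0  # pedotransfer-function estimate (assumed units)
sigma_log = 0.3     # assumed regression error on log10 K(b)

c_sim = leached_conc(k_b_nominal)  # simulation with no errors
samples = [leached_conc(k_b_nominal * 10 ** random.gauss(0.0, sigma_log))
           for _ in range(10_000)]
c80 = statistics.quantiles(samples, n=5)[-1]  # 80th percentile concentration
f_u = c80 / c_sim
print(f_u > 1.0)  # parameter uncertainty inflates the percentile estimate
```

As in the abstract, f(u) can then be used to correct a no-error simulated concentration up to an estimated 80th percentile value.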
How can health systems research reach the worst-off? A conceptual exploration.
Pratt, Bridget; Hyder, Adnan A
2016-11-15
Health systems research is increasingly being conducted in low- and middle-income countries (LMICs). Such research should aim to reduce health disparities between and within countries as a matter of global justice. For such research to do so, ethical guidance that is consistent with egalitarian theories of social justice proposes it ought to (amongst other things) focus on worst-off countries and research populations. Yet who constitutes the worst-off is not well-defined. By applying existing work on disadvantage from political philosophy, the paper demonstrates that (at least) two options exist for how to define the worst-off upon whom equity-oriented health systems research should focus: those who are worst-off in terms of health or those who are systematically disadvantaged. The paper describes in detail how both concepts can be understood and what metrics can be relied upon to identify worst-off countries and research populations at the sub-national level (groups, communities). To demonstrate how each can be used, the paper considers two real-world cases of health systems research and whether their choice of country (Uganda, India) and research population in 2011 would have been classified as amongst the worst-off according to the proposed concepts. The two proposed concepts can classify different countries and sub-national populations as worst-off. It is recommended that health researchers (or other actors) should use the concept that best reflects their moral commitments, namely to perform research focused on reducing health inequalities or systematic disadvantage more broadly. If addressing the latter, it is recommended that they rely on the multidimensional poverty approach rather than the income approach to identify worst-off populations.
Faith, Daniel P
2015-02-19
The phylogenetic diversity measure ('PD') quantifies the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods, but expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on a non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses: it may not choose the conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
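The expected-PD calculation mentioned in the abstract has a compact form: each branch of the phylogeny contributes its length times the probability that at least one of its descendant taxa survives. A minimal sketch on a hypothetical three-taxon tree (all numbers invented for illustration):

```python
# Toy rooted tree ((A,B),C) with branch lengths; each branch lists the taxa
# descended from it. Lengths and extinction probabilities are hypothetical.
branches = {
    "A":  (2.0, {"A"}),
    "B":  (2.0, {"B"}),
    "AB": (1.0, {"A", "B"}),
    "C":  (3.0, {"C"}),
}
p_ext = {"A": 0.9, "B": 0.9, "C": 0.1}  # per-taxon extinction probabilities

def expected_pd(branches, p_ext):
    """Sum of branch lengths weighted by P(at least one descendant survives)."""
    total = 0.0
    for length, taxa in branches.values():
        p_all_lost = 1.0
        for taxon in taxa:
            p_all_lost *= p_ext[taxon]
        total += length * (1.0 - p_all_lost)
    return total

print(round(expected_pd(branches, p_ext), 2))  # → 3.29
```

Note how the deep branch "AB" contributes only 0.19 of its length 1.0 because both of its descendants are likely to go extinct: exactly the kind of worst-case long-branch loss that expectation-based scores can under-weight.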
NASA Astrophysics Data System (ADS)
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-04-01
Vibration energy harvesting has been shown to be a promising power source for many small-scale applications, mainly because of the considerable reduction in the energy consumption of electronics and the scalability issues of conventional batteries. However, energy harvesters may not be as robust as conventional batteries, and their performance can drastically deteriorate in the presence of uncertainty in their parameters. Hence, the study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While previous studies have focused on optimizing the expected power, we propose a new and more practical optimization perspective: optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and, as a simple example, apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power, and then optimize for it under different confidence levels. The results show a significant improvement in the worst-case power of the robustly designed harvester compared to that of a naively (deterministically) optimized harvester.
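The min-max idea can be sketched with a toy power curve; the model and all numbers below are hypothetical, not the paper's piezoelectric model. The design that maximizes the minimum power over the uncertainty set beats the nominally optimized design in the worst case:

```python
# Hypothetical resonant power curve: a design parameter d harvests most power
# when it matches the (uncertain) natural frequency omega.
def power(d, omega):
    return 1.0 / (1.0 + 100.0 * (d - omega) ** 2)

omegas = [0.9 + 0.0075 * i for i in range(21)]  # uncertainty: omega in [0.9, 1.05]
designs = [0.8 + 0.005 * i for i in range(81)]  # candidate designs

def worst(d):
    """Worst-case (minimum) power of design d over the uncertainty set."""
    return min(power(d, w) for w in omegas)

naive = max(designs, key=lambda d: power(d, 1.0))  # optimized at nominal omega
robust = max(designs, key=worst)                   # optimized for the worst case

print(worst(robust) > worst(naive))  # robust design has higher worst-case power
```

Because the assumed uncertainty interval is asymmetric about the nominal frequency, the robust design deliberately detunes itself away from the nominal optimum to protect the worst case.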
Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.
Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng
2013-01-01
Caches play an important role in embedded systems in bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, caches complicate Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache; however, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose a mechanism that combines instruction prefetching (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve worst-case cache performance and, in turn, worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.
Fendler, Wojciech; Hogendorf, Anna; Szadkowska, Agnieszka; Młynarski, Wojciech
2011-01-01
Self-monitoring of blood glucose (SMBG) is one of the cornerstones of diabetes management. The aims were to evaluate the potential for miscoding of a personal glucometer, to define a target population for a non-coding glucometer among pediatric patients with diabetes, and to assess the accuracy of the Contour TS non-coding system. The potential for miscoding during self-monitoring of blood glucose was evaluated by means of an anonymous questionnaire, with worst and best case scenarios evaluated depending on the response patterns. Testing of the Contour TS system was performed according to guidelines set by the national committee for clinical laboratory standards. The estimated frequency of individuals prone to non-coding ranged from 68.21% (95%CI 60.70-75.72%) to 7.95% (95%CI 3.86-12.31%) for the worst and best case scenarios, respectively. Factors associated with an increased likelihood of non-coding were: a smaller number of tests per day, a greater number of individuals involved in testing, and self-testing by the patient with diabetes. The Contour TS device showed intra- and inter-assay accuracy of ~95%, linear association with laboratory measurements (R2=0.99, p<0.0001) and a consistent but small bias of -1.12% (95% Confidence Interval -3.27 to 1.02%). Clarke error grid analysis showed 4% of values within the benign error zone (B), with the other measurements yielding an acceptably accurate result (zone A). The Contour TS system showed sufficient accuracy to be safely used in monitoring of pediatric diabetic patients. Patients from families with a high throughput of test strips, or with multiple individuals involved in SMBG using the same meter, are candidates for clinical use of such devices due to an increased risk of calibration errors.
Uclés, S; Lozano, A; Sosa, A; Parrilla Vázquez, P; Valverde, A; Fernández-Alba, A R
2017-11-01
Gas and liquid chromatography coupled to triple quadrupole tandem mass spectrometry are currently the most powerful tools employed for the routine analysis of pesticide residues in food control laboratories. However, whatever the multiresidue extraction method, there will be a residual matrix effect making it difficult to identify/quantify some specific compounds in certain cases. Two main effects stand out: (i) co-elution with isobaric matrix interferents, which can be a major drawback for unequivocal identification and can therefore lead to false negative detections, and (ii) signal suppression/enhancement, commonly called the "matrix effect", which may cause serious problems including inaccurate quantitation, low analyte detectability and increased method uncertainty. The aim of this analytical study is to provide a framework for evaluating the maximum expected errors associated with matrix effects. The worst-case study was designed to estimate the extreme errors caused by matrix effects when extraction/determination protocols are applied in routine multiresidue analysis. Twenty-five different blank matrices extracted with the four most common extraction methods used in routine analysis (citrate QuEChERS with/without PSA clean-up, ethyl acetate and the Dutch mini-Luke "NL" methods) were evaluated by both GC-QqQ-MS/MS and LC-QqQ-MS/MS. The results showed that the presence of matrix compounds with transitions isobaric to target pesticides was higher in GC than in LC under the experimental conditions tested. In a second study, the number of "potential" false negatives was evaluated. For that, ten matrices with higher percentages of natural interfering components were checked. Additionally, the results showed that in more than 90% of the cases, pesticide quantification was not affected by matrix-matched standard calibration when an interferent was kept constant along the calibration curve. The error in quantification depended on the concentration level.
In a third study, the "matrix effect" was evaluated for each commodity/extraction method. Results showed 44% of cases with suppression/enhancement for LC and 93% of cases with enhancement for GC. Copyright © 2017 Elsevier B.V. All rights reserved.
Hellige, G
1976-01-01
The dynamic response characteristics of 38 catheter-manometer systems, determined experimentally in vitro, were uniform up to 5 c.p.s. in the worst case and up to 26 c.p.s. at best. Accordingly, some systems are only satisfactory for ordinary pressure recording in the resting heart, while the better systems record dp/dt correctly up to moderate inotropic stimulation of the heart. In the frequency range of uniform response (amplitude error less than +/-5%), the phase distortion is also negligible. In clinical application the investigator is often restricted to a special type of cardiac catheter; in this case a low-compliance transducer yields superior results. In all examined systems the combination with MSD 10 transducers is best, whereas the combination with P 23 Db transducers gives the poorest results. An inadequate system for recording ventricular pressure pulses leads in most cases to overestimation of dp/dtmax. The use of low-pass filters to attenuate higher-frequency artefacts is not suitable, under clinical conditions, for extending the range of uniform frequency response. The dynamic response of 14 catheter-manometer systems with two types of continuous self-flush units was also determined. The use of the P 37 flush unit in combination with small-internal-diameter catheters leads to serious errors in ordinary pressure recording, due to amplitude distortion of the lower harmonics. The frequency response characteristics of the combination of an Intraflow flush system and an MSD 10 transducer were similar to those of the non-flushing P 23 Db transducer configuration.
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
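The trace-parsing idea can be sketched without the IPG machinery: count hits of each loop-header instrumentation point per trace and keep the maximum over all traces. The trace format and loop names below are assumptions; note that this flat count gives only a whole-program bound, whereas the paper's IPG-based parsing extracts bounds relative to any loop-nesting level:

```python
from collections import defaultdict

def loop_bounds(traces, headers):
    """Max hit count of each loop-header instrumentation point over all traces."""
    bounds = defaultdict(int)
    for trace in traces:
        counts = defaultdict(int)
        for point in trace:
            if point in headers:
                counts[point] += 1
        for header, n in counts.items():
            bounds[header] = max(bounds[header], n)
    return dict(bounds)

# Two hypothetical traces of a program with loop headers L1 (outer), L2 (inner).
traces = [
    ["start", "L1", "L2", "L2", "L1", "L2", "end"],
    ["start", "L1", "L2", "L2", "L2", "end"],
]
print(loop_bounds(traces, {"L1", "L2"}))  # → {'L1': 2, 'L2': 3}
```

A nesting-aware version would reset the inner counter at each outer-loop entry, capturing the non-rectangular loop dependencies the paper exploits for tighter WCET estimates.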
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
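A link budget of the kind the abstract mentions reduces to decibel bookkeeping. In the sketch below, all figures except the 20.5 kbit/s channel rate and the 3.75 dB required Eb/No are invented for illustration and are not the paper's numbers:

```python
import math

# Back-of-envelope downlink budget: margin = received Eb/No - required Eb/No.
def db(x):
    return 10.0 * math.log10(x)

eirp_dbw = 23.0          # assumed satellite EIRP toward the aircraft
path_loss_db = 188.5     # assumed L-band free-space + excess losses
g_over_t_dbk = -13.0     # assumed aircraft terminal G/T
boltzmann_dbw = -228.6   # 10*log10(Boltzmann constant), dBW/(K*Hz)
bit_rate_db = db(20_500) # 20.5 kbit/s audio channel rate

ebno_db = eirp_dbw - path_loss_db + g_over_t_dbk - boltzmann_dbw - bit_rate_db
margin_db = ebno_db - 3.75  # required decoded Eb/No from the abstract
print(margin_db > 0)        # positive margin means the link closes
```

With these assumed figures the received Eb/No comes out near 7 dB, leaving roughly a 3 dB margin over the 3.75 dB requirement.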
NASA Technical Reports Server (NTRS)
Anderson, W. W.; Joshi, S. M.
1975-01-01
An annular suspension and pointing system consisting of pointing assemblies for coarse and vernier pointing is described. The first assembly is attached to a carrier spacecraft (e.g., the space shuttle) and consists of an azimuth gimbal and an elevation gimbal which provide 'coarse' pointing. The second or vernier pointing assembly is made up of magnetic actuators of suspension and fine pointing, roll motor segments, and an instrument or experiment mounting plate around which is attached a continuous annular rim similar to that used in the annular momentum control device. The rim provides appropriate magnetic circuits for the actuators and the roll motor segments for any instrument roll position. The results of a study to determine the pointing accuracy of the system in the presence of crew motion disturbances are presented. Typical 3 sigma worst-case errors are found to be of the order of 0.001 arc-second.
Shrawder, S; Lapin, G D; Allen, C V; Vick, N A; Groothuis, D R
1994-01-01
We designed a new head holder for immobilization and repositioning in dynamic CT studies of the brain. A customized thermoplastic face mask and foam head rest were made to restrict movement of the head in all directions, but particularly out of the axial plane (z-movement). This design provided a rigid, detailed mold of the face and back of the head that minimized motion during lengthy CT studies and enabled accurate repositioning of the head for follow-up studies. Markers applied directly to the skin were used to quantify z-movement. When tested on 12 subjects, immobilization was limited to < 2.0 mm under worst-case conditions when the subject was asked to attempt forced movements. Repositioning was accurate to < 1.5 mm when the subject was removed from the head holder and then placed back into it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel; Makarov, PNNL Yuri; Subbarao, PNNL Kris
RUT software is designed for use by Balancing Authorities to predict and display additional requirements caused by variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. Such a deficiency of balancing resources can pose serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty, including wind, solar and load forecast errors. The tool evaluates the generation required for a worst-case scenario, with a user-specified confidence level.
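The confidence-level idea can be sketched as a percentile of combined forecast-error samples. The error magnitudes and distributions below are assumed for illustration, not RUT's actual algorithm:

```python
import random
import statistics

# Combine load, wind and solar forecast-error samples and read off the
# balancing requirement at a chosen confidence level (all numbers assumed).
random.seed(3)

N = 10_000
load_err = [random.gauss(0, 100) for _ in range(N)]   # MW error samples
wind_err = [random.gauss(0, 60) for _ in range(N)]    # (assumed standard
solar_err = [random.gauss(0, 30) for _ in range(N)]   #  deviations)
net = [l + w + s for l, w, s in zip(load_err, wind_err, solar_err)]

# Up-capacity requirement at 95% confidence: the 95th percentile of net error.
req_up = statistics.quantiles(net, n=100)[94]
print(req_up > 0)  # extra capacity needed to cover 95% of error outcomes
```

Raising the confidence level moves the percentile further into the tail, which is how a worst-case requirement grows with the user's risk tolerance.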
Analysis of on-orbit thermal characteristics of the 15-meter hoop/column antenna
NASA Technical Reports Server (NTRS)
Andersen, Gregory C.; Farmer, Jeffery T.; Garrison, James
1987-01-01
In recent years, interest in large deployable space antennas has led to the development of the 15-meter hoop/column antenna. The thermal environment the antenna is expected to experience during orbit is examined, and the temperature distributions leading to reflector surface distortion errors are determined. Two flight orientations are examined, corresponding to: (1) normal operation, and (2) use in a Shuttle-attached flight experiment. A reduced element model was used to determine element temperatures at 16 orbit points for both flight orientations. The temperatures ranged from a minimum of 188 K to a maximum of 326 K. Based on the element temperatures, orbit positions leading to possible worst-case surface distortions were determined, and the corresponding temperatures were used in a static finite element analysis to quantify surface control cord deflections. The predicted changes in the control cord lengths were in the submillimeter range.
Improving AIRS Radiance Spectra in High Contrast Scenes Using MODIS
NASA Technical Reports Server (NTRS)
Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.
2015-01-01
The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua Spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7-15.4 microns with spectral resolution of better than 1200, and spatial resolution of 13.5 km with global daily coverage. The AIRS is designed to measure temperature and water vapor profiles for improvement in weather forecast accuracy and improved understanding of climate processes. As with most instruments, the AIRS Point Spread Functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but is typically only noticeable in about 1% of the scenes and about 10% of the channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument to provide information on the scene uniformity that is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a Principal Component (PC) reconstruction of the AIRS channels. In some cases where the scene has high inhomogeneity in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors in the AIRS relative to MODIS or uncertainties in the PSF. 
Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst case conditions and improves the ability to match AIRS and MODIS radiances in non-uniform scenes.
Quantum algorithm for association rules mining
NASA Astrophysics Data System (ADS)
Yu, Chao-Hua; Gao, Fei; Wang, Qing-Le; Wen, Qiao-Yan
2016-10-01
Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Given a transaction database that has a large number of transactions and items, the task of ARM is to acquire consumption habits of customers by discovering the relationships between itemsets (sets of items). In this paper, we address ARM in the quantum setting and propose a quantum algorithm for the key part of ARM: finding the frequent itemsets among the candidate itemsets and acquiring their supports. Specifically, for the case in which there are Mf(k) frequent k-itemsets among the Mc(k) candidate k-itemsets (Mf(k) ≤ Mc(k)), our algorithm can efficiently mine these frequent k-itemsets and estimate their supports by using parallel amplitude estimation and amplitude amplification, with complexity O(k√(Mc(k)Mf(k))/ε), where ε is the error for estimating the supports. Compared with the classical counterpart, i.e., the classical sampling-based algorithm, whose complexity is O(kMc(k)/ε²), our quantum algorithm quadratically improves the dependence on both ε and Mc(k) in the best case, when Mf(k) ≪ Mc(k), and on ε alone in the worst case, when Mf(k) ≈ Mc(k).
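The classical baseline the abstract compares against is simple to sketch: estimate an itemset's support by sampling transactions, which needs on the order of 1/ε² samples per candidate for estimation error ε. The transactions and candidate itemset below are synthetic:

```python
import random

# Classical sampling-based support estimation for one candidate itemset.
random.seed(7)

transactions = [frozenset(random.sample(range(10), 4)) for _ in range(5000)]
candidate = frozenset({1, 2})

def sampled_support(itemset, txns, n_samples):
    """Fraction of sampled transactions containing the itemset."""
    hits = sum(itemset <= random.choice(txns) for _ in range(n_samples))
    return hits / n_samples

exact = sum(candidate <= t for t in transactions) / len(transactions)
approx = sampled_support(candidate, transactions, 20_000)
print(abs(approx - exact) < 0.02)  # within the targeted estimation error
```

The quantum algorithm's advantage is that amplitude estimation shrinks the 1/ε² sample cost to 1/ε, and amplitude amplification reduces the dependence on the number of candidates.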
Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan
2016-06-02
Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study of children suspected of severe malaria in sub-Saharan Africa, the question arose of how much blood could be safely sampled; the intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not exceeding 3.8 % of total blood volume (TBV). In a simulation exercise using data from children previously enrolled in a study of severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8 % safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data on age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe, mL) was calculated by multiplying the body weight (kg) by the factor 80 (mL/kg). Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor was the ratio of the child's Hb to the reference Hb; both the lowest ('best case') and highest ('worst case') reference Hb values were used. The exact volume represented by a 3.8 % proportion of this TBVf was then calculated and compared to the blood volumes intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise showed that in 5.3 % (best case) and 11.4 % (worst case) of children the blood volume intended to be sampled would exceed the volume defined by the 3.8 % safety guideline. The highest proportions would be in the age groups 2-6 months (19.0 %; worst case scenario) and 6 months-2 years (15.7 %; worst case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst case scenario (p = 0.016). Blood sampling in children for research in P.
falciparum endemic settings may easily violate the proposed safety guideline when applied to TBVf. Ethical committees and researchers should be wary of this and take appropriate precautions.
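The abstract's arithmetic can be sketched directly. The example child below is hypothetical, and the reference Hb value is an assumption for illustration:

```python
# Safety calculation: estimated total blood volume (80 mL/kg), corrected for
# anaemia by the ratio of the child's Hb to the reference Hb, then capped at
# the 3.8% sampling guideline.
def max_sample_ml(weight_kg, hb_g_dl, ref_hb_g_dl):
    tbv_estimated = 80.0 * weight_kg                        # TBVe, mL
    tbv_functional = tbv_estimated * hb_g_dl / ref_hb_g_dl  # TBVf, mL
    return 0.038 * tbv_functional                           # 3.8% guideline

# Hypothetical infant: 6 kg, severe malarial anaemia (Hb 3.5 g/dL), with an
# assumed reference Hb of 11 g/dL (the "worst case" uses the highest reference).
allowed = max_sample_ml(6, 3.5, 11.0)
print(round(allowed, 1), allowed < 6.0)  # → 5.8 True: a 6 mL draw exceeds it
```

This is exactly the mechanism behind the abstract's finding: for small, severely anaemic children, an intended 6 mL draw can exceed the anaemia-corrected 3.8 % limit even though it respects the uncorrected one.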
Discussions On Worst-Case Test Condition For Single Event Burnout
NASA Astrophysics Data System (ADS)
Liu, Sandra; Zafrani, Max; Sherman, Phillip
2011-10-01
This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs, based on analysis of quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB is the highest-mass ion, which results in the highest transient current due to charge deposition and displacement damage. The analyses also show that it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, which has been verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.
Registration of 2D to 3D joint images using phase-based mutual information
NASA Astrophysics Data System (ADS)
Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul
2007-03-01
Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.
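The MI similarity measure at the core of all three compared schemes can be sketched with intensity histograms. The paper's variant feeds local phase (from a complex wavelet transform) rather than raw intensity into the same formula; the toy "images" below are synthetic:

```python
import math

# Histogram-based mutual information between two images, flattened to lists
# of 8-bit intensities.
def mutual_information(img_a, img_b, bins=4, lo=0, hi=256):
    n = len(img_a)
    joint = {}
    for a, b in zip(img_a, img_b):
        ia = min(bins - 1, (a - lo) * bins // (hi - lo))
        ib = min(bins - 1, (b - lo) * bins // (hi - lo))
        joint[ia, ib] = joint.get((ia, ib), 0) + 1
    mi = 0.0
    for (ia, ib), c in joint.items():
        p_ab = c / n
        p_a = sum(v for (x, _), v in joint.items() if x == ia) / n
        p_b = sum(v for (_, y), v in joint.items() if y == ib) / n
        mi += p_ab * math.log(p_ab / (p_a * p_b))
    return mi

img = [10, 60, 130, 200] * 25  # synthetic 100-pixel "image"
aligned = mutual_information(img, img)
misaligned = mutual_information(img, img[::-1])
print(aligned > misaligned)    # MI peaks when the images line up
```

A registration loop maximizes this score over candidate poses; substituting phase for intensity makes the score less sensitive to partial overlap, which is the failure mode the paper targets.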
The association between semantic dementia and surface dyslexia in Japanese.
Fushimi, Takao; Komori, Kenjiro; Ikeda, Manabu; Lambon Ralph, Matthew A; Patterson, Karalyn
2009-03-01
One theory about reading suggests that producing the correct pronunciations of written words, particularly less familiar words with an atypical spelling-sound relationship, relies in part on knowledge of the word's meaning. This hypothesis has been supported by reports of surface dyslexia in large case-series studies of English-speaking/reading patients with semantic dementia (SD), but would have increased credibility if it applied to other languages and writing systems as well. The hypothesis predicts that, of the two systems used to write Japanese, SD patients should be unimpaired at oral reading of kana because of its invariant relationship between orthography and phonology. By contrast, oral reading of kanji should be impaired in a graded fashion depending on the consistency characteristics of the kanji target words, with worst performance on words whose component characters take 'minority' (atypical) pronunciations, especially if the words are of lower frequency. Errors in kanji reading should primarily reflect assignment of more typical readings to the component characters in these atypical words. In the largest-ever-reported case series of Japanese patients with semantic dementia, we tested and confirmed this hypothesis.
A novel N-input voting algorithm for X-by-wire fault-tolerant systems.
Karimi, Abbas; Zarafshan, Faraneh; Al-Haddad, S A R; Ramli, Abdul Rahman
2014-01-01
Voting is an important operation in the multichannel computation paradigm and in the realization of ultrareliable, real-time control systems, where it arbitrates among the results of N redundant variants. These systems include N-modular redundant (NMR) hardware systems and diversely designed software systems based on N-version programming (NVP). Depending on the characteristics of the application and the type of selected voter, the voting algorithms can be implemented for either hardware or software systems. In this paper, a novel voting algorithm is introduced for real-time fault-tolerant control systems, appropriate for applications in which N is large. Its behavior was then implemented in software and evaluated under different scenarios of error injection on the system inputs. Evaluation through plots and statistical computations demonstrates that this novel algorithm does not have the limitations of some popular voting algorithms such as the median and weighted voters; moreover, it is able to significantly increase the reliability and availability of the system, by up to 2489.7% and 626.74%, respectively, in the best case, and by 3.84% and 1.55% in the worst case.
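The abstract does not specify the novel voter itself; for orientation, the classic algorithms it is compared against can be sketched as follows. The median voter is standard; the agreement-weighted "soft" voter below is an illustrative stand-in, not the paper's algorithm:

```python
import math
import statistics

def median_voter(inputs):
    """Classic median voter: robust to a minority of arbitrarily wrong channels."""
    return statistics.median(inputs)

def soft_voter(inputs):
    """Agreement-weighted voter: a channel that disagrees with the others gets
    an exponentially small weight. The weighting scheme is illustrative only."""
    weights = []
    for i, x in enumerate(inputs):
        others = [y for j, y in enumerate(inputs) if j != i]
        mean_dev = sum(abs(x - y) for y in others) / len(others)
        weights.append(math.exp(-mean_dev))
    return sum(w * x for w, x in zip(weights, inputs)) / sum(weights)

# Five redundant channels, one faulty (100.0): both voters stay near the true 5.0.
readings = [4.9, 5.0, 5.1, 5.0, 100.0]
print(median_voter(readings))          # 5.0
print(round(soft_voter(readings), 2))  # ~5.0
```

Both arbitrate a single output from N redundant inputs; the paper's contribution is a voter that scales better for large N.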
Who Sits Where? Infrastructure-Free In-Vehicle Cooperative Positioning via Smartphones
He, Zongjian; Cao, Jiannong; Liu, Xuefeng; Tang, Shaojie
2014-01-01
Seat-level positioning of a smartphone in a vehicle can provide a fine-grained context for many interesting in-vehicle applications, including driver distraction prevention, driving behavior estimation, in-vehicle services customization, etc. However, most of the existing work on in-vehicle positioning relies on special infrastructure, such as the car stereo, cigarette-lighter adapter or OBD (on-board diagnostics) adapter. In this work, we propose iLoc, an infrastructure-free, in-vehicle, cooperative positioning system via smartphones. iLoc does not require any extra devices and uses only the sensors embedded in smartphones to determine the phones' seat-level locations in a car. In iLoc, in-vehicle smartphones automatically collect data during certain kinds of events and cooperatively determine the relative left/right and front/back locations. In addition, iLoc is tolerant to noisy data and possible sensor errors. We evaluate the performance of iLoc using experiments conducted in real driving scenarios. Results show that the positioning accuracy can reach 90% in the majority of cases and around 70% even in the worst cases. PMID:24984062
Bartnicki, Jerzy; Amundsen, Ingar; Brown, Justin; Hosseini, Ali; Hov, Øystein; Haakenstad, Hilde; Klein, Heiko; Lind, Ole Christian; Salbu, Brit; Szacinski Wendel, Cato C; Ytre-Eide, Martin Album
2016-01-01
The Russian nuclear submarine K-27 suffered a loss of coolant accident in 1968 and, with nuclear fuel in both reactors, was scuttled in 1981 in the outer part of Stepovogo Bay, located on the eastern coast of Novaya Zemlya. The inventory of spent nuclear fuel on board the submarine is of concern because it represents a potential source of radioactive contamination of the Kara Sea, and a criticality accident with potential for long-range atmospheric transport of radioactive particles cannot be ruled out. To address these concerns and to provide a better basis for evaluating possible radiological impacts of potential releases in case a salvage operation is initiated, we assessed the atmospheric transport of radionuclides and their deposition in Norway from a hypothetical criticality accident on board the K-27. To achieve this, a long-term (33-year) meteorological database was prepared and used for selection of the worst-case meteorological scenarios for each of three selected locations of the potential accident. Next, the dispersion model SNAP was run with the source term for the worst-case accident scenario and the selected meteorological scenarios. The results showed predictions to be very sensitive to the estimation of the source term for the worst-case accident, and especially to the sizes and densities of released radioactive particles. The results indicated that a large area of Norway could be affected, but that the deposition in Northern Norway would be considerably higher than in other areas of the country. The simulations showed that deposition from the worst-case scenario of a hypothetical K-27 accident would be at least two orders of magnitude lower than the deposition observed in Norway following the Chernobyl accident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Oda, A.; Yamaotsu, N.; Hirono, S.; Takano, Y.; Fukuyoshi, S.; Nakagaki, R.; Takahashi, O.
2013-08-01
CAMDAS is a conformational search program in which high-temperature molecular dynamics (MD) calculations are carried out. In this study, the conformational search ability of CAMDAS was evaluated using a test set of 281 protein-ligand complexes with known structures. For the test, the influences of initial settings and initial conformations on search results were validated. Using the CAMDAS program, reasonable conformations, whose root-mean-square deviations (RMSDs) from the crystal structures were less than 2.0 Å, could be obtained for 96% of the test set even when the worst initial settings were used. The success rate was comparable to that of OMEGA, and the errors of CAMDAS were smaller than those of OMEGA: the worst RMSD obtained with CAMDAS was around 2.5 Å, whereas the worst value obtained with OMEGA was around 4.0 Å. The results indicate that CAMDAS is a robust and versatile conformational search method and that it can be used for a wide variety of small molecules. In addition, the accuracy of the conformational search was further improved by longer MD calculations and by multiple MD simulations.
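The 2.0 Å success criterion above is a plain coordinate RMSD. A minimal version (assuming the conformers are already superposed, i.e., without the Kabsch alignment a full comparison would need) looks like:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation (in the coordinate units, here angstroms)
    between two matched conformers. Assumes they are already superposed;
    a full comparison would first apply Kabsch alignment."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Displacing every atom by 1 A along x gives an RMSD of exactly 1.0 A.
ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
shifted = ref + np.array([1.0, 0.0, 0.0])
print(rmsd(ref, shifted))  # 1.0
```

A generated conformer counts as "reasonable" in the study when this value against the crystal structure is below 2.0 Å.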
How well do CMIP5 models simulate the low-level jet in western Colombia?
NASA Astrophysics Data System (ADS)
Sierra, Juan P.; Arias, Paola A.; Vieira, Sara C.; Agudelo, Jhoana
2017-11-01
The Choco jet is an important atmospheric feature of the hydro-climatology of Colombia and northern South America. This work assesses the ability of 26 coupled and 11 uncoupled (AMIP) global climate models (GCMs) included in the fifth phase of the Coupled Model Intercomparison Project (CMIP5) archive to simulate the basic climatological features (annual cycle, spatial distribution and vertical structure) of this jet. Using factor and cluster analysis, we objectively classify models into Best, Worst, and Intermediate groups. Despite the coarse resolution of the GCMs, this study demonstrates that nearly all models can represent the existence of the Choco low-level jet. AMIP and Best models present a more realistic simulation of the jet. Worst models exhibit biases such as an anomalous southward location of the Choco jet throughout the year and a shallower jet. The models' skill in representing this jet comes from their ability to reproduce some of its main drivers, such as the temperature and pressure differences between particular regions of the eastern Pacific and western Colombia, which are non-local features. Conversely, Worst models considerably underestimate the temperature and pressure differences between these key regions. We identify a close relationship between the location of the Choco jet and the Inter-tropical Convergence Zone (ITCZ), and CMIP5 models are able to represent this relationship. Errors in the Worst models are related to biases in the location of the ITCZ over the eastern tropical Pacific Ocean, as well as to the representation of topography and to horizontal resolution.
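As a schematic stand-in for the paper's factor-and-cluster classification (which used several diagnostics, not a single score), a tiny one-dimensional k-means over hypothetical model skill scores illustrates the Best/Intermediate/Worst grouping:

```python
import numpy as np

def classify_models(scores, iters=20):
    """Group 1-D skill scores into Worst/Intermediate/Best with a tiny k-means.
    The paper's grouping used factor + cluster analysis over several
    diagnostics; this single-score version is only a schematic stand-in."""
    pts = np.asarray(scores, dtype=float)
    centers = np.quantile(pts, [0.1, 0.5, 0.9])  # deterministic, spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(pts[:, None] - centers[None, :]), axis=1)
        centers = np.array([pts[labels == j].mean() for j in range(3)])
    return labels  # 0 = Worst, 1 = Intermediate, 2 = Best

# Hypothetical skill scores for nine GCMs (higher = better agreement with reanalysis).
scores = [0.91, 0.88, 0.93, 0.52, 0.50, 0.47, 0.12, 0.10, 0.08]
print(classify_models(scores).tolist())  # [2, 2, 2, 1, 1, 1, 0, 0, 0]
```

The quantile initialization keeps the toy example deterministic; a real analysis would cluster multivariate skill metrics after factor reduction.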
Busch, Martin H J; Vollmann, Wolfgang; Grönemeyer, Dietrich H W
2006-05-26
Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless, inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow diagnostic information to be obtained from the implant lumen (in-stent stenosis or thrombosis, for example). The electromagnetic RF pulses applied during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach 1/4 of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor, and also depends on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution for a hot spot in thermal equilibrium is described. This analytical solution with a definite hot-spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions are considered in a numerical simulation, which may make the results less critical. Both the analytical solution and the numerical simulations use the experimentally determined maximum hot-spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations.
The finite-volume analysis calculates time-evolving temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot-spot power loss is assumed to diffuse into each of the two wire parts at the location of the defect. From there, the energy is distributed by heat conduction. Additionally, the effects of blood perfusion and blood flow are taken into account in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. The analytical solution for the worst-case scenario, as well as the finite-volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot-spot situations. MR investigations with a high RF-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot-spot power loss are the primary factors influencing the volume with critical temperature increases. Wire radius, wire material and the physiological parameters blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but do not exclude critical tissue heating for resonators with a large product of resonator volume and quality factor. The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite-volume analysis can calculate the results for conditions near to, and further from, the worst case. In both cases a substantial volume can reach a critical temperature increase in a short time.
The analytical solution, as the absolute worst case, shows that resonators with a small product of inductance volume and quality factor (Q V(ind) < 2 cm3) are definitely safe. Stents for coronary vessels, or resonators used as tracking devices for interventional procedures, therefore carry no risk of high temperature increases. The finite-volume analysis shows that even conditions not close to the worst case can reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q V(ind) > 10 cm3). Such resonators exclude patients from exactly the MRI investigations these devices are made for.
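The analytical worst case described above corresponds to the textbook steady-state solution for a point heat source in an infinite homogeneous medium, dT(r) = P / (4*pi*k*r), with no perfusion or flow. A sketch with an illustrative (not paper-derived) hot-spot power of 50 mW:

```python
import math

def hotspot_temperature_rise(power_w, r_m, k_tissue=0.5):
    """Steady-state temperature rise at distance r from a point heat source in
    an infinite homogeneous medium: dT = P / (4 * pi * k * r). No perfusion or
    blood flow, matching the worst-case assumption; k ~ 0.5 W/(m K) is a
    typical soft-tissue thermal conductivity. The 50 mW example below is
    illustrative, not a value taken from the paper."""
    return power_w / (4.0 * math.pi * k_tissue * r_m)

print(round(hotspot_temperature_rise(0.05, 1e-3), 1))  # ~8.0 K rise at 1 mm
print(round(hotspot_temperature_rise(0.05, 5e-3), 1))  # ~1.6 K rise at 5 mm
```

The 1/r falloff is why critical heating is confined to a small but, as the abstract notes, not negligible volume around the defect.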
Colored noise effects on batch attitude accuracy estimates
NASA Technical Reports Server (NTRS)
Bilanow, Stephen
1991-01-01
The effects of colored noise on the accuracy of batch least-squares parameter estimates, with applications to attitude determination, are investigated. The standard approaches used for estimating the accuracy of a computed attitude commonly assume uncorrelated (white) measurement noise, while in actual flight experience measurement noise often contains significant time correlations and is thus colored. For example, horizon scanner measurements from low Earth orbit were observed to show correlations over many minutes in response to large-scale atmospheric phenomena. A general approach to the analysis of the effects of colored noise is investigated, and interpretation of the resulting equations provides insight into the effects of any particular noise color and into the worst-case noise coloring for any particular parameter estimate. It is shown that for certain cases the effects of relatively short-term correlations can be accommodated by a simple correction factor. The errors in the predicted accuracy assuming white noise, and the reduced accuracy due to the suboptimal nature of estimators that do not take the noise color characteristics into account, are discussed. A variety of sample noise color characteristics are demonstrated through simulation, and their effects are discussed for sample estimation cases. Based on the analysis, options for dealing with the effects of colored noise are discussed.
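The core effect, white-noise formulas understating the variance of a batch estimate when the noise is actually correlated, can be demonstrated with a toy Monte Carlo: estimating a constant by the sample mean under AR(1) noise (the attitude-specific machinery of the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
N, phi, sigma, trials = 200, 0.8, 1.0, 2000

def ar1_noise(n):
    """AR(1) (colored) noise with unit marginal variance."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    innov = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + innov[k]
    return x

# The batch least-squares estimate of a constant (true value 0) is the sample mean.
est = np.array([ar1_noise(N).mean() for _ in range(trials)])
white_var = sigma**2 / N  # variance predicted under the white-noise assumption
print(est.var() / white_var)  # ~ (1 + phi) / (1 - phi) = 9: badly understated
```

The realized variance exceeds the white-noise prediction by roughly (1 + phi) / (1 - phi), the kind of correction factor the paper derives for short-term correlations.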
Jow, Uei-Ming; Ghovanloo, Maysam
2012-12-21
We present a design methodology for an overlapping hexagonal planar spiral coil (hex-PSC) array, optimized to create a homogeneous magnetic field for wireless power transmission to randomly moving objects. The modular hex-PSC array has been implemented in the form of three parallel conductive layers, for which an iterative optimization procedure defines the PSC geometries. Since the overlapping hex-PSCs in different layers have different characteristics, the worst-case coil-coupling condition should be designed to provide the maximum power transfer efficiency (PTE) in order to minimize spatial fluctuations of the received power. In the worst case, the transmitter (Tx) hex-PSC is overlapped by six PSCs and surrounded by six other adjacent PSCs. Using a receiver (Rx) coil 20 mm in radius, at a coupling distance of 78 mm and a maximum lateral misalignment of 49.1 mm (1/√3 of the PSC radius), power can be received at a PTE of 19.6% from the worst-case PSC. Furthermore, we have studied the effects of Rx coil tilting and concluded that the PTE degrades significantly when θ > 60°. Solutions are: 1) activating two adjacent overlapping hex-PSCs simultaneously with out-of-phase excitations to create horizontal magnetic flux, and 2) including a small energy-storage element in the Rx module to maintain power in worst-case scenarios. To verify the proposed design methodology, we have developed the EnerCage system, which aims to power up biological instruments attached to or implanted in freely behaving small animal subjects' bodies during long-term electrophysiology experiments within large experimental arenas.
2014-01-01
Background Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, e.g. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. Methods We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory signals and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the intended application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. The performance, depending on the initialization of the filters, is also analyzed using three different configurations ranging from best to worst case. Results Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects, the adaptive filter obtains mean errors (standard deviations) of -0.2 min⁻¹ (0.3 min⁻¹) and -0.7 bpm (1.7 bpm), compared to -0.2 min⁻¹ (0.4 min⁻¹) and 42.0 bpm (6.1 bpm) for the non-adaptive filter, for respiration and heart rate, respectively. Under bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that continuously adapting the Kalman filter improves its ability to separate the desired signals from the raw sensor data. 
The average total computational time needed for the Kalman filters is under 25% of the total signal length, making real-time filtering possible. Conclusions It is possible to measure heart and breathing rates in real time using an adaptive Kalman filter approach. Adapting the Kalman filter matrices improves the estimation results and makes the filter universally deployable for measuring cardiorespiratory signals. PMID:24886253
Foussier, Jerome; Teichmann, Daniel; Jia, Jing; Misgeld, Berno; Leonhardt, Steffen
2014-05-09
Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, e.g. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory signals and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the intended application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. The performance, depending on the initialization of the filters, is also analyzed using three different configurations ranging from best to worst case. Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects, the adaptive filter obtains mean errors (standard deviations) of -0.2 min⁻¹ (0.3 min⁻¹) and -0.7 bpm (1.7 bpm), compared to -0.2 min⁻¹ (0.4 min⁻¹) and 42.0 bpm (6.1 bpm) for the non-adaptive filter, for respiration and heart rate, respectively. Under bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that continuously adapting the Kalman filter improves its ability to separate the desired signals from the raw sensor data. 
The average total computational time needed for the Kalman filters is under 25% of the total signal length, making real-time filtering possible. It is possible to measure heart and breathing rates in real time using an adaptive Kalman filter approach. Adapting the Kalman filter matrices improves the estimation results and makes the filter universally deployable for measuring cardiorespiratory signals.
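As a much-reduced sketch of the idea (the paper's filter is multichannel and adapts its covariance matrices online; this one is scalar with fixed q and r), a random-walk Kalman filter separating a drifting offset from measurement noise:

```python
import numpy as np

def kalman_offset(z, q=1e-4, r=0.5):
    """Scalar random-walk Kalman filter tracking a slowly varying offset.
    q (process) and r (measurement) noise variances are fixed here; the
    paper's filter adapts these covariances online and is multichannel."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                  # predict: offset may have drifted
        gain = p / (p + r)         # Kalman gain
        x = x + gain * (zk - x)    # measurement update
        p = (1.0 - gain) * p
        out[k] = x
    return out

rng = np.random.default_rng(0)
t = np.arange(2000) / 100.0
offset = 5.0 + 0.2 * t                     # slowly drifting baseline
z = offset + rng.normal(0.0, 0.7, t.size)  # noisy raw sensor signal
est = kalman_offset(z)
print(np.var(est - offset) < np.var(z - offset))  # True: offset largely recovered
```

Subtracting the tracked offset leaves the oscillatory components, which the full filter would further split into respiratory and cardiac parts using the fed-back rate estimates.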
Failed State 2030: Nigeria - A Case Study
2011-02-01
disastrous ecological conditions in its Niger Delta region, and is fighting one of the modern world's worst legacies of political and economic corruption. A nation with more than 350 ethnic groups, 250 languages, and three distinct religious... happening in the world. The discussion herein is a mix of cultural sociology, political science, economics, military science (sometimes called
NASA Astrophysics Data System (ADS)
Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin
2018-03-01
This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate with which the probability of error goes to zero is adopted to indicate the detection performance of a detector. In practice, we expect attacks on sensors to be sporadic, so the system may operate with all sensors benign for extended periods of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off and then propose a detection strategy that achieves these limits. We then consider a special case where there is no trade-off between security and efficiency; in other words, our detection strategy can achieve maximal efficiency and maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied, with fundamental limits and achievability results provided: 1) a subset of sensors, namely "secure" sensors, are assumed to be equipped with better security countermeasures and hence are guaranteed to be benign; 2) detection performance with an unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
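A toy simulation, not the paper's detector or its exponent analysis, illustrates the efficiency/security tension: a mean-threshold detector that is accurate with benign sensors degrades sharply once a few sensors are compromised (the attacker model below is a crude stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)

def error_prob(n_sensors, n_comp, trials=20000, mu=1.0):
    """Empirical error probability of a mean-threshold detector for
    H0: N(0, 1) vs H1: N(mu, 1), when n_comp sensors are hijacked to
    report an extreme value for the wrong hypothesis (a crude attacker)."""
    errors = 0
    for _ in range(trials):
        h1 = rng.random() < 0.5
        x = rng.normal(mu if h1 else 0.0, 1.0, n_sensors)
        if n_comp:
            x[:n_comp] = -3.0 if h1 else mu + 3.0  # push toward the wrong decision
        errors += (x.mean() > mu / 2.0) != h1
    return errors / trials

print(error_prob(10, 0))  # benign: small error probability (high efficiency)
print(error_prob(10, 3))  # three compromised sensors wreck this naive detector
```

Robust detectors of the kind the paper designs give up some benign-case efficiency in exchange for a bounded worst-case error under such manipulation.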
NASA Technical Reports Server (NTRS)
Avila, Arturo
2011-01-01
Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature, so these simulations represent the upper and lower bounds. This, effectively, is the JPL thermal design margin philosophy. Uncertainty in the margins and in the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
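The worst-case stacking described above can be sketched generically: evaluate a temperature model at every low/high corner of the uncertain parameters and keep the extremes (the model and parameter ranges below are hypothetical, not MER values):

```python
from itertools import product

def worst_case_bounds(temp_model, uncertain):
    """Evaluate the model at every low/high corner of the uncertain parameters
    and return the coldest and hottest predictions (the stacked worst cases)."""
    names = list(uncertain)
    temps = [temp_model(dict(zip(names, corner)))
             for corner in product(*(uncertain[n] for n in names))]
    return min(temps), max(temps)

# Hypothetical linear model: hotter with more flux, lossier blanket, poorer conductance.
def temp_model(p):
    return 20.0 + 30.0 * p["solar_flux"] + 15.0 * p["blanket_loss"] - 10.0 * p["conductance"]

bounds = {"solar_flux": (0.9, 1.1), "blanket_loss": (0.8, 1.2), "conductance": (0.7, 1.3)}
cold, hot = worst_case_bounds(temp_model, bounds)
print(cold, hot)  # the cold- and hot-biased design temperatures
```

For a monotone model, only the single worst corner per bias needs evaluating; exhaustive corner search, as here, costs 2^n model runs but needs no monotonicity assumption.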
Optimizing Processes to Minimize Risk
NASA Technical Reports Server (NTRS)
Loyd, David
2017-01-01
NASA, like the other hazardous industries, has suffered very catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you can't eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.
2012-04-30
DoD SERC, Aeronautics & Astronautics, 5/16/2012, NPS 9th Annual Acquisition Research Symposium. [Figure residue: plots of probability to complete a mission vs. time (mins) for architecture 1 and architecture 2, and worst-case comparison of arch1 vs. arch2 as a function of the fraction of system failures.]
Fine-Scale Structure Design for 3D Printing
NASA Astrophysics Data System (ADS)
Panetta, Francis Julian
Modern additive fabrication technologies can manufacture shapes whose geometric complexity far exceeds what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to build these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating, let alone optimizing, the full-resolution geometry sent to the printer requires orders of magnitude more computational power than is currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macroscale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, and then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces that break it most efficiently. This analysis is needed when the designer has insufficient knowledge or experience to predict the forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization.
The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.
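The chapter's reduction of the worst-case pressure search to an efficient linear program can be illustrated in miniature. The sketch below is hypothetical and not the thesis's formulation: it assumes stress at one monitored point depends linearly on a handful of surface pressures (coefficients c[i]), each pressure is capped, and total applied force is bounded; with only box and budget constraints, that linear program is solved greedily, fractional-knapsack style.

```python
# Hypothetical miniature of "pressure optimization as a linear program":
# maximize stress = sum(c[i] * p[i]) subject to 0 <= p[i] <= p_cap
# and sum(p) <= total_force. With only box + budget constraints, the
# LP optimum is reached greedily: fill the largest coefficients first.

def worst_case_pressure(c, p_cap, total_force):
    """Return pressures maximizing a linear stress functional."""
    p = [0.0] * len(c)
    budget = total_force
    # Visit regions in order of decreasing stress influence.
    for i in sorted(range(len(c)), key=lambda i: c[i], reverse=True):
        if budget <= 0 or c[i] <= 0:
            break
        p[i] = min(p_cap, budget)
        budget -= p[i]
    return p

# Three candidate surface regions with stress coefficients 2.0, 5.0, 1.0,
# a per-region pressure cap of 1.0, and a total force budget of 1.5.
p = worst_case_pressure([2.0, 5.0, 1.0], p_cap=1.0, total_force=1.5)
worst_stress = sum(ci * pi for ci, pi in zip([2.0, 5.0, 1.0], p))
# Region 1 is filled to its cap; region 0 receives the remaining 0.5.
```

A real formulation would couple many stress evaluation points and a richer pressure model, but the greedy structure conveys why the relaxed problem is cheap to solve.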
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.
2003-01-01
We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmosphere Model with a 2-D cloud-resolving model (CRM) installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely owing to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitudes of errors relative to the ICA are even smaller for the LW CORAM, which applies a similar overlap treatment. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well, with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves further when a scheme accounting for cloud inhomogeneity is introduced. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst-case scenarios, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought of estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of these particular CORAMs are small, they appear to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.
Velocity measurement by vibro-acoustic Doppler.
Nabavizadeh, Alireza; Urban, Matthew W; Kinnick, Randall R; Fatemi, Mostafa
2012-04-01
We describe the theoretical principles of a new Doppler method, which uses the acoustic response of a moving object to a highly localized dynamic radiation force of the ultrasound field to calculate the velocity of the moving object from the Doppler frequency shift. This method, named vibro-acoustic Doppler (VAD), employs two ultrasound beams separated by a slight frequency difference, Δf, transmitting in an X-focal configuration. Both ultrasound beams experience a frequency shift because of the moving object, and their interaction at the joint focal zone produces an acoustic frequency shift around the low-frequency (Δf) acoustic emission signal. The acoustic emission field resulting from the vibration of the moving object is detected and used to calculate its velocity. We report the formula that describes the relation between the Doppler frequency shift of the emitted acoustic field and the velocity of the moving object. To verify the theory, we used a string phantom. We also tested our method by measuring fluid velocity in a tube. The results show that the error calculated for both string and fluid velocities is less than 9.1%. Our theory shows that, in the worst case, the error is 0.54% for a 25° angle variation with the VAD method, compared with an error of -82.6% for the same 25° angle variation with a conventional continuous-wave Doppler method. An advantage of this method is that, unlike conventional Doppler, it is not sensitive to the angle between the ultrasound beams and the direction of motion.
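The angle sensitivity of conventional Doppler that VAD avoids can be illustrated with the standard continuous-wave relation (this sketch uses the textbook CW Doppler formula, not the paper's VAD formula; the speed of sound, carrier frequency, and angles are assumed values):

```python
import math

C = 1540.0  # assumed speed of sound in soft tissue, m/s

def doppler_shift(v, f0, theta_deg):
    """Conventional CW Doppler shift for a scatterer moving at v (m/s)."""
    return 2.0 * v * f0 * math.cos(math.radians(theta_deg)) / C

def velocity_from_shift(fd, f0, theta_deg):
    """Invert the relation, assuming beam-motion angle theta_deg."""
    return fd * C / (2.0 * f0 * math.cos(math.radians(theta_deg)))

# True motion: 0.3 m/s at 40 degrees to the beam, 5 MHz carrier.
fd = doppler_shift(0.3, 5e6, 40.0)
# If the operator misjudges the beam-flow angle by 25 degrees, the
# estimate is scaled by cos(40)/cos(65), roughly an 81% overestimate.
v_wrong = velocity_from_shift(fd, 5e6, 65.0)
rel_error = v_wrong / 0.3 - 1.0
```

Because the cosine term appears in the denominator, small angle errors near steep angles produce large velocity errors, which is the failure mode the VAD geometry is designed to suppress.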
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahi-Anwar, M; Young, S; Lo, P
Purpose: A method to discriminate different types of renal cell carcinoma (RCC) was developed using attenuation values observed in multiphasic contrast-enhanced CT. This work evaluates the sensitivity of this RCC discrimination task at different CT radiation dose levels. Methods: We selected 5 cases of kidney lesion patients who had undergone four-phase CT scans covering the abdomen to the iliac crest. Under an IRB-approved study, the scans were conducted on 64-slice CT scanners (Definition AS/Definition Flash, Siemens Healthcare) using automatic tube-current modulation (TCM). The protocol included an initial baseline unenhanced scan, followed by three post-contrast injection phases. CTDIvol (32 cm phantom) ranged from 9 to 35 mGy for any given phase. As a preliminary study, we limited the scope to the cortico-medullary phase, shown previously to be the most discriminative phase. A previously validated method was used to simulate a reduced-dose acquisition by adding noise to raw CT sinogram data, emulating corresponding images at simulated doses of 50%, 25%, and 10%. To discriminate the lesion subtype, ROIs were placed in the most enhancing region of the lesion. The mean HU value of an ROI was extracted and used to assign the worst-case RCC subtype, ranked in the order clear cell, papillary, chromophobe, and the benign oncocytoma. Results: Two patients exhibited a change of worst-case RCC subtype between the original and simulated scans, at 25% and 10% doses. In one case, the worst-case RCC subtype changed from oncocytoma to chromophobe at 10% and 25% doses, while the other case changed from oncocytoma to clear cell at 10% dose. Conclusion: Based on preliminary results from an initial cohort of 5 patients, worst-case RCC subtypes remained constant at all simulated dose levels except for 2 patients. Further study conducted on more patients will be needed to confirm our findings.
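The reduced-dose simulation step can be caricatured in image space. The study's validated method operates on raw sinogram data; the sketch below is a simplified image-domain stand-in with made-up HU numbers: lowering dose to a fraction r scales quantum noise variance by 1/r, so zero-mean Gaussian noise carrying the missing variance is added to each pixel.

```python
import math
import random

def simulate_reduced_dose(pixels_hu, sigma_full, dose_fraction, seed=0):
    """Image-domain caricature of low-dose simulation: at dose fraction r,
    noise variance scales as 1/r, so add zero-mean Gaussian noise with
    the missing variance sigma_full^2 * (1/r - 1) to each pixel."""
    rng = random.Random(seed)
    extra_sigma = sigma_full * math.sqrt(1.0 / dose_fraction - 1.0)
    return [p + rng.gauss(0.0, extra_sigma) for p in pixels_hu]

# A flat 60 HU region with 10 HU noise at full dose, re-simulated at 25% dose.
roi = simulate_reduced_dose([60.0] * 10000, sigma_full=10.0, dose_fraction=0.25)
mean_hu = sum(roi) / len(roi)  # the mean HU is preserved; only noise grows
```

This is why mean-ROI measurements such as the subtype discrimination above degrade gracefully with dose: the ROI mean is unbiased, and only its variance increases as dose drops.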
Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant Support from: U01 CA181156.
Williams, Camille K.; Tremblay, Luc; Carnahan, Heather
2016-01-01
Researchers in the domain of haptic training are now entering the long-standing debate regarding whether it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error-minimizing, error-augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance, the speed-accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group's performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance), coupled with the error augmentation group's frequent off-target experience and rich experience of error correction, promoted information processing related to error detection and error correction that is essential for motor learning. PMID:28082937
Historical shoreline mapping (I): improving techniques and reducing positioning errors
Thieler, E. Robert; Danforth, William W.
1994-01-01
A critical need exists among coastal researchers and policy-makers for a precise method to obtain shoreline positions from historical maps and aerial photographs. A number of methods that vary widely in approach and accuracy have been developed to meet this need. None of the existing methods, however, address the entire range of cartographic and photogrammetric techniques required for accurate coastal mapping. Thus, their application to many typical shoreline mapping problems is limited. In addition, no shoreline mapping technique provides an adequate basis for quantifying the many errors inherent in shoreline mapping using maps and air photos. As a result, current assessments of errors in air photo mapping techniques generally (and falsely) assume that errors in shoreline positions are represented by the sum of a series of worst-case assumptions about digitizer operator resolution and ground control accuracy. These assessments also ignore altogether other errors that commonly approach ground distances of 10 m. This paper provides a conceptual and analytical framework for improved methods of extracting geographic data from maps and aerial photographs. We also present a new approach to shoreline mapping using air photos that revises and extends a number of photogrammetric techniques. These techniques include (1) developing spatially and temporally overlapping control networks for large groups of photos; (2) digitizing air photos for use in shoreline mapping; (3) preprocessing digitized photos to remove lens distortion and film deformation effects; (4) simultaneous aerotriangulation of large groups of spatially and temporally overlapping photos; and (5) using a single-ray intersection technique to determine geographic shoreline coordinates and express the horizontal and vertical error associated with a given digitized shoreline. 
As long as historical maps and air photos are used in studies of shoreline change, there will be a considerable amount of error (on the order of several meters) present in shoreline position and rate-of- change calculations. The techniques presented in this paper, however, provide a means to reduce and quantify these errors so that realistic assessments of the technological noise (as opposed to geological noise) in geographic shoreline positions can be made.
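The abstract's criticism of summing worst-case assumptions can be made concrete with a generic error-budget comparison (the sources and magnitudes below are illustrative, not the paper's data): summing absolute worst cases assumes every source errs fully in the same direction, while independent zero-mean sources combine as a root-sum-of-squares.

```python
import math

# Illustrative (made-up) error sources in a shoreline position, in meters:
# digitizer resolution, ground-control accuracy, residual photo distortion.
errors = {"digitizer": 2.0, "ground_control": 4.0, "distortion": 3.0}

# Worst-case sum: every source errs fully, in the same direction.
worst_case = sum(errors.values())                     # 9.0 m

# Independent, zero-mean sources combine as a root-sum-of-squares.
rss = math.sqrt(sum(e * e for e in errors.values()))  # about 5.39 m
```

The gap between the two totals widens as more sources are included, which is one reason worst-case summation overstates, and unmodeled sources understate, the realistic positional error.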
Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino
2017-03-01
Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated the benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance, in terms of plan quality and robustness, of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-base and head and neck cancer patients. 
Overall, LP-based methods were suitable for the less challenging cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in the more difficult cases, in which tight dose limits were hard to meet under most uncertainty scenarios. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull-base cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Yang, Tsong-Shing; Chi, Ching-Chi; Wang, Shu-Hui; Lin, Jing-Chi; Lin, Ko-Ming
2016-10-01
Biologic therapies are more effective but more costly than conventional therapies in treating psoriatic arthritis. To evaluate the cost-efficacy of etanercept, adalimumab and golimumab therapies in treating active psoriatic arthritis in a Taiwanese setting, we conducted a meta-analysis of randomized placebo-controlled trials to calculate the incremental efficacy of etanercept, adalimumab and golimumab, respectively, in achieving the Psoriatic Arthritis Response Criteria (PsARC) and a 20% improvement in the American College of Rheumatology score (ACR20). The base-, best-, and worst-case incremental cost-effectiveness ratios (ICERs) for one subject to achieve PsARC and ACR20 were calculated. The annual ICERs per PsARC responder were US$27 047 (best scenario US$16 619; worst scenario US$31 350), US$39 339 (best scenario US$31 846; worst scenario US$53 501) and US$27 085 (best scenario US$22 716; worst scenario US$33 534) for etanercept, adalimumab and golimumab, respectively. The annual ICERs per ACR20 responder were US$27 588 (best scenario US$20 900; worst scenario US$41 800), US$39 339 (best scenario US$25 236; worst scenario US$83 595) and US$33 534 (best scenario US$27 616; worst scenario US$44 013) for etanercept, adalimumab and golimumab, respectively. In a Taiwanese setting, etanercept had the lowest annual costs per PsARC and ACR20 responder, while adalimumab had the highest annual costs per PsARC and ACR20 responder. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
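The ICER arithmetic underlying these figures is simple: extra cost divided by extra probability of response. The sketch below uses hypothetical cost and response inputs, not the study's data, to show the calculation.

```python
def icer(cost_new, cost_old, eff_new, eff_old):
    """Incremental cost-effectiveness ratio: extra cost per extra responder."""
    d_eff = eff_new - eff_old
    if d_eff <= 0:
        raise ValueError("new therapy must add effectiveness for an ICER")
    return (cost_new - cost_old) / d_eff

# Hypothetical inputs (not the study's data): annual biologic cost US$16,000
# versus US$1,000 for conventional therapy; PsARC response rate 0.72 vs 0.25.
cost_per_responder = icer(16000, 1000, 0.72, 0.25)  # about US$31,915
```

Best- and worst-case ICERs then follow by substituting the confidence-interval bounds of the incremental efficacy into the same formula.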
NASA Technical Reports Server (NTRS)
Hubbard, Dorthy (Technical Monitor); Lorenzini, E. C.; Shapiro, I. I.; Cosmo, M. L.; Ashenberg, J.; Parzianello, G.; Iafolla, V.; Nozzoli, S.
2003-01-01
We discuss specific, recent advances in the analysis of an experiment to test the Equivalence Principle (EP) in free fall. A differential accelerometer detector with two proof masses of different materials free falls inside an evacuated capsule previously released from a stratospheric balloon. The detector spins slowly about its horizontal axis during the fall. An EP violation signal (if present) will manifest itself at the rotational frequency of the detector. The detector operates in a quiet environment as it slowly moves with respect to the co-moving capsule. There are, however, gravitational and dynamical noise contributions that need to be evaluated in order to define key requirements for this experiment. Specifically, higher-order mass moments of the capsule contribute errors to the differential acceleration output with components at the spin frequency which need to be minimized. The dynamics of the free falling detector (in its present design) has been simulated in order to estimate the tolerable errors at release which, in turn, define the release mechanism requirements. Moreover, the study of the higher-order mass moments for a worst-case position of the detector package relative to the cryostat has led to the definition of requirements on the shape and size of the proof masses.
NASA Astrophysics Data System (ADS)
Wai Kuan, Yip; Teoh, Andrew B. J.; Ngo, David C. L.
2006-12-01
We introduce a novel method for secure computation of a biometric hash on dynamic hand signatures using BioPhasor mixing and[InlineEquation not available: see fulltext.] discretization. The use of BioPhasor as the mixing process provides a one-way transformation that precludes exact recovery of the biometric vector from compromised hashes and stolen tokens. In addition, our user-specific[InlineEquation not available: see fulltext.] discretization acts both as an error correction step and as a real-to-binary space converter. We also propose a new method of extracting a compressed representation of dynamic hand signatures using the discrete wavelet transform (DWT) and discrete Fourier transform (DFT). By avoiding the conventional use of dynamic time warping, the proposed method avoids storage of the user's hand signature template. This is an important consideration for protecting the privacy of the biometric owner. Our results show that the proposed method produces stable and distinguishable bit strings with equal error rates (EERs) of[InlineEquation not available: see fulltext.] and[InlineEquation not available: see fulltext.] for random and skilled forgeries in the stolen-token (worst case) scenario, and[InlineEquation not available: see fulltext.] for both forgeries in the genuine-token (optimal) scenario.
A Worst-Case Approach for On-Line Flutter Prediction
NASA Technical Reports Server (NTRS)
Lind, Rick C.; Brenner, Martin J.
1998-01-01
Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative with respect to the changing dynamics throughout the flight. A simulation demonstrates that this method can improve the efficiency of flight testing by accurately predicting the flutter margin, improving safety while reducing the necessary flight time.
Boehmler, Erick M.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 3.8 to 6.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 6.7 ft. The worst-case abutment scour also occurred at the 500-year discharge. Pier scour ranged from 9.1 to 10.2 ft. The worst-case pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Olson, Scott A.; Hammond, Robert E.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour at the left abutment ranged from 3.1 to 10.3 ft. with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 6.4 to 10.4 ft. with the worst-case occurring at the 100-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Medalie, Laura
1997-01-01
Contraction scour for the modelled flows ranged from 1.0 to 2.7 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour ranged from 8.4 to 17.6 ft. The worst-case abutment scour for the right abutment occurred at the incipient-overtopping discharge. For the left abutment, the worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, R.L.; Medalie, Laura
1998-01-01
Contraction scour for all modelled flows ranged from 0.0 to 2.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 6.7 to 8.7 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. Right abutment scour ranged from 7.8 to 9.5 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and Davis, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Jain, Dhruv; Tikku, Gargi; Bhadana, Pallavi; Dravid, Chandrashekhar; Grover, Rajesh Kumar
2017-08-01
We investigated World Health Organization (WHO) grading and pattern-of-invasion based histological schemes as independent predictors of disease-free survival in oral squamous carcinoma patients. Tumor resection slides of eighty-seven oral squamous carcinoma patients [pTNM: I&II/III&IV-32/55] were evaluated. Besides examining various patterns of invasion, the invasive front grade, predominant WHO grade and worst (highest) WHO grade were recorded. For worst WHO grading, the poor-undifferentiated component was estimated semi-quantitatively at the advancing tumor edge (invasive growth front) in histology sections. Tumor recurrence was observed in 31 (35.6%) cases. The 2-year disease-free survival was 47% [Median: 656; follow-up: 14-1450] days. Using receiver operating characteristic curves, we defined a poor-undifferentiated component exceeding 5% of the tumor as the cutoff to assign an oral squamous carcinoma as grade-3 when following worst WHO grading. Kaplan-Meier curves for disease-free survival revealed prognostic associations with nodal involvement, tumor size, worst WHO grading, most common pattern of invasion, and invasive pattern grading score (sum of the two most predominant patterns of invasion). In further multivariate analysis, tumor size (>2.5cm) and worst WHO grading (grade-3 tumors) independently predicted reduced disease-free survival [HR, 2.85; P=0.028 and HR, 3.37; P=0.031, respectively]. The inter-observer agreement was moderate for observers who semi-quantitatively estimated the percentage of poor-undifferentiated morphology in oral squamous carcinomas. Our results support the value of the semi-quantitative method of assigning tumors as grade-3 with worst WHO grading for predicting reduced disease-free survival. Despite limitations, of the various histological tumor stratification schemes, WHO grading holds adjunctive value for its prognostic role, ease and universal familiarity. Copyright © 2017 Elsevier Inc. All rights reserved.
Integrity modelling of tropospheric delay models
NASA Astrophysics Data System (ADS)
Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó
2017-04-01
The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data are present) and taking the point where the cumulative distribution reaches an exceedance level of 10⁻⁷. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it is surely conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-Interim reanalysis numerical weather model data and the ray-tracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. 
In order to provide a more realistic, but still conservative, estimation of the residual error of tropospheric delays, the mathematical formulation of the overbounding models is currently under development. This study introduces the main findings of the residual error analysis of the studied tropospheric delay models and discusses the preliminary analysis of the integrity model development for safety-of-life applications.
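The relationship between a residual bound, an exceedance probability, and an overbounding standard deviation can be sketched with a zero-mean Gaussian (a simplified illustration of the overbounding idea, not the INTRO project's formulation; the 0.12 m bound and 10⁻⁷ level are taken from the abstract, the two-sided-tail interpretation is an assumption):

```python
from statistics import NormalDist

def overbound_sigma(bound_m, p_exceed):
    """Sigma of a zero-mean Gaussian whose two-sided tail mass beyond
    +/- bound_m equals p_exceed, i.e. the smallest Gaussian that still
    keeps the exceedance probability at the bound."""
    z = NormalDist().inv_cdf(1.0 - p_exceed / 2.0)
    return bound_m / z

# A 0.12 m residual bound interpreted at a 1e-7 exceedance level
# corresponds to a much smaller per-sample sigma (about 0.023 m).
sigma = overbound_sigma(0.12, 1e-7)
```

Read the other way, this is why extrapolating a fitted 0.05 m sigma out to the 10⁻⁷ tail yields a bound several times larger than the sigma itself: the tail quantile multiplies the standard deviation by a factor above five.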
Isolator fragmentation and explosive initiation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Rae, Philip John; Foley, Timothy J.
2016-09-19
Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst-case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.
Isolator fragmentation and explosive initiation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Rae, Philip John; Foley, Timothy J.
2015-09-30
Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst-case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.
Including robustness in multi-criteria optimization for intensity-modulated proton therapy
NASA Astrophysics Data System (ADS)
Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David
2012-02-01
We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. 
For all cases examined, the robust optimization for each Pareto optimal plan takes less than 5 min on a standard computer, making a computationally friendly interface for the planner possible. In conclusion, the uncertainty pertinent to the IMPT procedure can be reduced during treatment planning by optimizing plans that emphasize different treatment objectives, including robustness, and then interactively seeking a most-preferred one from the solution Pareto surface.
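The robustified objective described in this abstract, i.e. the worst objective value over a set of pre-computed error scenarios, can be sketched as follows. This is a minimal illustration only: the dose vectors, the prescribed dose, and the scenario set are hypothetical, not taken from the paper.

```python
def robustified_objective(dose_per_scenario, objective):
    """Worst-case robustified objective: evaluate the plan objective under
    every pre-computed error scenario and keep the worst value."""
    return max(objective(dose) for dose in dose_per_scenario)

def target_objective(dose, prescribed=60.0):
    # Mean squared deviation from a prescribed target dose (hypothetical).
    return sum((d - prescribed) ** 2 for d in dose) / len(dose)

# Nominal dose plus two error scenarios (e.g. setup shift, range undershoot);
# all numbers are illustrative, not data from the study.
scenarios = [
    [60.0, 60.0],  # nominal
    [58.0, 61.0],  # shifted patient position
    [57.0, 59.0],  # range undershoot
]
worst = robustified_objective(scenarios, target_objective)
```

Minimizing such a worst-case value (instead of the nominal objective alone) is what makes the resulting plan robust to the pre-defined error scenarios.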
Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders
2010-08-15
This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which again challenges risk assessors in terms of giving the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis. 
Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk components in cumulative risk management. Copyright 2009 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity for the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve.
The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problem. The general version of the algorithm is presented and analyzed.
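The quantum-versus-classical comparison quoted above can be checked with simple arithmetic on the exponents. The function names below are ours; the bounds are the ones stated in the abstract for 3-regular graphs.

```python
# Compare the stated worst-case bounds for the Hamiltonian cycle problem
# on 3-regular graphs: O(2^(n/4)) (quantum) versus O(2^(31n/96)) (classical).
def quantum_bound(n):
    return 2 ** (n / 4)

def classical_bound(n):
    return 2 ** (31 * n / 96)

# Since 1/4 = 24/96 < 31/96, the quantum bound grows strictly slower;
# the speed-up factor is 2^(7n/96).
advantage_exponent = 31 / 96 - 1 / 4
```

For example, at n = 96 nodes the bounds are 2^24 versus 2^31, a factor of 2^7 = 128 in the quantum algorithm's favor.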
A volumetric pulmonary CT segmentation method with applications in emphysema assessment
NASA Astrophysics Data System (ADS)
Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.
2006-03-01
A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea as well as the primary bronchi; the pulmonary region is then identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high resolution CT exams, due to the presence of aerial and vascular structures. Nevertheless, the average error is smaller than the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method in pulmonary emphysema that also classifies emphysema according to its severity degree. Two clinically proved thresholds are applied, identifying regions with severe emphysema and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
NASA Astrophysics Data System (ADS)
Leal-Junior, Arnaldo G.; Vargas-Valencia, Laura; dos Santos, Wilian M.; Schneider, Felipe B. A.; Siqueira, Adriano A. G.; Pontes, Maria José; Frizera, Anselmo
2018-07-01
This paper presents a low cost and highly reliable system for angle measurement based on a sensor fusion between inertial and fiber optic sensors. The system consists of the sensor fusion through Kalman filter of two inertial measurement units (IMUs) and an intensity variation-based polymer optical fiber (POF) curvature sensor. In addition, the IMU was applied as a reference for a compensation technique of POF curvature sensor hysteresis. The proposed system was applied on the knee angle measurement of a lower limb exoskeleton in flexion/extension cycles and in gait analysis. Results show the accuracy of the system, where the Root Mean Square Error (RMSE) between the POF-IMU sensor system and the encoder was below 4° in the worst case and about 1° in the best case. Then, the POF-IMU sensor system was evaluated as a wearable sensor for knee joint angle assessment without the exoskeleton, where its suitability for this purpose was demonstrated. The results obtained in this paper pave the way for future applications of sensor fusion between electronic and fiber optic sensors in movement analysis.
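The POF-IMU system above fuses the two angle estimates with a Kalman filter. A minimal scalar sketch of the underlying variance-weighted measurement update is given below; this is not the authors' implementation, and all numbers are made up for illustration.

```python
def fuse_angles(imu_angle, pof_angle, imu_var, pof_var):
    """One variance-weighted measurement update, the scalar core of a
    Kalman-filter fusion of two angle estimates (degrees)."""
    gain = imu_var / (imu_var + pof_var)   # weight given to the POF reading
    fused = imu_angle + gain * (pof_angle - imu_angle)
    fused_var = (1.0 - gain) * imu_var     # fused estimate is less uncertain
    return fused, fused_var

# Illustrative numbers only: two equally trusted sensors average their readings,
# and the fused variance halves.
angle, var = fuse_angles(30.0, 32.0, imu_var=1.0, pof_var=1.0)
```

A full filter would add a prediction step between updates; the point here is only that the fused estimate always has lower variance than either sensor alone.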
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messiaen, A., E-mail: a.messiaen@fz-juelich.de; Ongena, J.; Vervier, M.
2015-12-10
The paper analyses how the phasing of the ITER ICRH 24-strap array evolves from the power sources up to the strap currents of the antenna. The study of the phasing control and coherence through the feeding circuits, with prematching and automatic matching and decoupling network, is made by modeling starting from the TOPICA matrix of the antenna array for a low-coupling plasma profile and for current drive phasing (the worst case for mutual coupling effects). The main results of the analysis are: (i) the strap current amplitude is well controlled by the antinode V_max amplitude of the feeding lines, (ii) the best toroidal phasing control is obtained by adjusting the mean phase of V_max of each poloidal column of straps, (iii) with a well-adjusted system the largest strap current phasing error is ±20°, (iv) the effect on load resilience remains well below the maximum affordable VSWR of the generators, (v) the effect on the radiated power spectrum versus k_// computed by means of the coupling code ANTITER II remains small for the considered cases.
Reply to "Comment on `Flow of wet granular materials: A numerical study' "
NASA Astrophysics Data System (ADS)
Khamseh, Saeed; Roux, Jean-Noël; Chevoir, François
2017-07-01
In his Comment on our paper [Phys. Rev. E 92, 022201 (2015), 10.1103/PhysRevE.92.022201], Chareyre criticizes, as inaccurate, the simple approach we adopted to explain the strong enhancement of the quasistatic shear strength of the material caused by capillary cohesion. He also observes that a similar form of the "effective stress" approach, accounting for the capillary shear stress, which we neglected, results in a quantitatively correct prediction of this yield stress. We agree with these remarks, which we deem quite relevant and valuable. We nevertheless point out that the initial approximation, despite ~25% errors on shear strength in the worst cases, provides a convenient estimate of the Mohr-Coulomb cohesion of the material, which is directly related to the coordination number. We argue that the effective stress assumption, despite its surprising success in the range of states explored in Khamseh et al. [Phys. Rev. E 92, 022201 (2015), 10.1103/PhysRevE.92.022201], is bound to fail in strongly cohesion-dominated material states.
NASA Technical Reports Server (NTRS)
Lind, Richard C. (Inventor); Brenner, Martin J.
2001-01-01
A structured singular value (mu) analysis method of computing flutter margins assesses the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators to accurately account for errors in the computed model and for the observed range of dynamics of the aircraft under test caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. This mu-based approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that can be accepted with more confidence than traditional methods, which do not update the analysis with flight data. Introducing mu as a flutter margin parameter presents several advantages over tracking damping trends as a measure of a tendency toward instability.
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90 nm CMOS technology, at a comparable BER.
Khor, Joo Moy; Tizzard, Andrew; Demosthenous, Andreas; Bayford, Richard
2014-06-01
Electrical impedance tomography (EIT) could be significantly advantageous to continuous monitoring of lung development in newborn and, in particular, preterm infants as it is non-invasive and safe to use within the intensive care unit. It has been demonstrated that accurate boundary form of the forward model is important to minimize artefacts in reconstructed electrical impedance images. This paper presents the outcomes of initial investigations for acquiring patient-specific thorax boundary information using a network of flexible sensors that imposes no restrictions on the patient's normal breathing and movements. The investigations include: (1) description of the basis of the reconstruction algorithms, (2) tests to determine a minimum number of bend sensors, (3) validation of two approaches to reconstruction and (4) an example of a commercially available bend sensor and its performance. Simulation results using ideal sensors show that, in the worst case, a total shape error of less than 6% with respect to its total perimeter can be achieved.
A System For Load Isolation And Precision Pointing
NASA Astrophysics Data System (ADS)
Keckler, Claude R.; Hamilton, Brian J.
1983-11-01
A system capable of satisfying the accuracy and stability requirements dictated by Shuttle-borne payloads utilizing large optics has been under joint NASA/Sperry development. This device, denoted the Annular Suspension and Pointing System, employs a unique combination of conventional gimbals and magnetic bearing actuators, thereby providing for the "complete" isolation of the payload from its external environment, as well as for extremely accurate and stable pointing (≈0.01 arcseconds). This effort has been pursued through the fabrication and laboratory evaluation of engineering model hardware. Results from these tests have been instrumental in generating high fidelity computer simulations of this load isolation and precision pointing system, and in permitting confident predictions of the system's on-orbit performance. Applicability of this system to the Solar Optical Telescope mission has been examined using the computer simulation. The worst case pointing error predicted for this payload while subjected to vernier reaction control system thruster firings and crew motions aboard Shuttle was approximately 0.006 arcseconds.
Managing risk in a challenging financial environment.
Kaufman, Kenneth
2008-08-01
Five strategies can help hospital financial leaders balance their organizations' financial and risk positions: understand the hospital's financial condition; determine the desired level of risk; consider total risk; use a portfolio approach; and explore best-case/worst-case scenarios to measure risk.
NASA Astrophysics Data System (ADS)
Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.
2016-06-01
This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel N as the optimal Type II error exponent when discriminating between a large number of independent instances of N and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
Cumulative uncertainty in measured streamflow and water quality data for small watersheds
Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.
2006-01-01
The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort.
This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
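The root mean square error propagation used above combines independent procedural uncertainties by summing them in quadrature. A minimal sketch follows; the four input values are hypothetical mid-range percentages, not figures from the study.

```python
import math

def cumulative_uncertainty(category_uncertainties):
    """Root mean square error propagation: combine independent procedural
    uncertainties (each in percent) into one cumulative probable uncertainty."""
    return math.sqrt(sum(u ** 2 for u in category_uncertainties))

# Hypothetical mid-range values (percent) for the four procedural categories:
# streamflow measurement, sample collection, preservation/storage, lab analysis.
total = cumulative_uncertainty([10.0, 20.0, 8.0, 12.0])
```

Because of the quadrature sum, the cumulative uncertainty is dominated by the largest single contribution, which is why misdirected quality control effort matters so much.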
GPS, BDS and Galileo ionospheric correction models: An evaluation in range delay and position domain
NASA Astrophysics Data System (ADS)
Wang, Ningbo; Li, Zishen; Li, Min; Yuan, Yunbin; Huo, Xingliang
2018-05-01
The performance of GPS Klobuchar (GPSKlob), BDS Klobuchar (BDSKlob) and NeQuick Galileo (NeQuickG) ionospheric correction models is evaluated in the range delay and position domains over China. The post-processed Klobuchar-style (CODKlob) coefficients provided by the Center for Orbit Determination in Europe (CODE) and our own fitted NeQuick coefficients (NeQuickC) are also included for comparison. In the range delay domain, BDS total electron content (TEC) derived from 20 international GNSS Monitoring and Assessment System (iGMAS) stations and GPS TEC obtained from 35 Crust Movement Observation Network of China (CMONC) stations are used as references. Compared to BDS TEC during the short period (doy 010-020, 2015), GPSKlob, BDSKlob and NeQuickG can correct 58.4, 66.7 and 54.7% of the ionospheric delay. Compared to GPS TEC for the long period (doy 001-180, 2015), the three ionospheric models can mitigate the ionospheric delay by 64.8, 65.4 and 68.1%, respectively. For the two comparison cases, CODKlob shows the worst performance, reducing only 57.9% of the ionospheric range errors. NeQuickC exhibits the best performance, outperforming GPSKlob, BDSKlob and NeQuickG by 6.7, 2.1 and 6.9%, respectively. In the position domain, single-frequency standard point positioning (SPP) was conducted at the selected 35 CMONC sites using GPS C/A pseudorange with and without ionospheric corrections. The vertical position error of the uncorrected case drops significantly from 10.3 m to 4.8, 4.6, 4.4 and 4.2 m for GPSKlob, CODKlob, BDSKlob and NeQuickG, respectively; however, the horizontal position error (3.2 m) merely decreases to 3.1, 2.7, 2.4 and 2.3 m, respectively. NeQuickG outperforms GPSKlob and BDSKlob by 5.8 and 1.9% in the vertical component, and by 25.0 and 3.2% in the horizontal component.
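The range-delay evaluation above reports the percentage of ionospheric delay each model mitigates. One simple residual-based way such a figure can be computed is sketched below; this is our illustration, not necessarily the authors' exact metric, and the TEC values are invented.

```python
def percent_mitigated(reference_tec, model_tec):
    """Share of the ionospheric delay removed by a correction model,
    as a percentage of the reference TEC (residual-based figure)."""
    residual = sum(abs(r - m) for r, m in zip(reference_tec, model_tec))
    total = sum(abs(r) for r in reference_tec)
    return 100.0 * (1.0 - residual / total)

# Illustrative TEC samples only (TECU); not data from the study.
score = percent_mitigated([10.0, 20.0, 30.0], [9.0, 19.0, 28.0])
```

A perfect model would leave zero residual and score 100%; a model that predicts nothing scores 0%, matching the way the correction percentages above are read.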
A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack
NASA Technical Reports Server (NTRS)
London, K. W.
2001-01-01
A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing more specialized representations. This model is used to predict the nutation growth time constant for the Triana Spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst case sloshing of a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing of a three degree of freedom pendulum model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Schild, S; Bues, M
Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against the worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created accounting for treatment uncertainties in two different ways: the first used the conventional method of delivering the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT.
Although robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as shown in the REBST case study. In the end it is concluded that an approach combining the two theories works best to reduce safety risk.
Estimated cost of universal public coverage of prescription drugs in Canada
Morgan, Steven G.; Law, Michael; Daw, Jamie R.; Abraham, Liza; Martin, Danielle
2015-01-01
Background: With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. Methods: We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Results: Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. Interpretation: The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. PMID:25780047
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, Henry
This research was mostly concerned with asymmetric vertical displacement event (AVDE) disruptions, which are the worst case scenario for producing a large asymmetric wall force. This is potentially a serious problem in ITER.
Multiple usage of the CD PLUS/UNIX system: performance in practice.
Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A
1995-01-01
In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable. PMID:8547902
Carter, D A; Hirst, I L
2000-01-07
This paper considers the application of one of the weighted risk indicators used by the Major Hazards Assessment Unit (MHAU) of the Health and Safety Executive (HSE) in formulating advice to local planning authorities on the siting of new major accident hazard installations. In such cases the primary consideration is to ensure that the proposed installation would not be incompatible with existing developments in the vicinity, as identified by the categorisation of the existing developments and the estimation of individual risk values at those developments. In addition, a simple methodology, described here, based on MHAU's "Risk Integral" and a single "worst case" event analysis, is used to enable the societal risk aspects of the hazardous installation to be considered at an early stage of the proposal, and to determine the degree of analysis that will be necessary to enable HSE to give appropriate advice.
1984-10-26
Keywords: test for independence; estimation of the product life estimator; dependent risks. ...the failure times associated with different failure modes when we really should use a bivariate (or multivariate) distribution, then what is the... dependencies may be present, then what is the magnitude of the estimation error? The third specific aim will attempt to obtain bounds on the...
40 CFR 90.119 - Certification procedure-testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... must select the duty cycle that will result in worst-case emission results for certification. For any... facility, in which case instrumentation and equipment specified by the Administrator must be made available... manufacturers may not use any equipment, instruments, or tools to identify malfunctioning, maladjusted, or...
Ivanoff, Michael A.
1997-01-01
Contraction scour for all modelled flows ranged from 2.1 to 4.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 14.3 to 14.4 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping and 500-year discharges. Right abutment scour ranged from 15.3 to 18.5 ft. The worst-case right abutment scour occurred at the 100-year and the incipient roadway-overtopping discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.
Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-09-18
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed, built on the constrained concave-convex procedure (CCCP), that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
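The Dinkelbach step used for the fractional EE objective can be sketched in a generic, finite-candidate form (illustrative only; not the paper's beamforming problem, where the inner maximization is solved by CCCP):

```python
def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set (assumes g > 0).
    Classic parametric scheme: solve x* = argmax f(x) - lam*g(x),
    then update lam = f(x*)/g(x*); stop when the parametric optimum
    value reaches zero."""
    lam = 0.0
    x = None
    for _ in range(max_iter):
        # Inner (parametric) subproblem: here an exhaustive argmax.
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        if abs(f(x) - lam * g(x)) < tol:
            break  # F(lam) = 0: lam is the maximal ratio
        lam = f(x) / g(x)
    return x, lam
```

For example, maximizing (2x+1)/(x²+1) over the candidates {0, 0.5, 1, 2} converges in a few iterations to x = 0.5 with ratio 1.6.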
MP3 player listening sound pressure levels among 10 to 17 year old students.
Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M
2011-11-01
Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤ 75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
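The exposure normalization implied here (Lex, an 8-hour equivalent level) can be sketched with the standard 3-dB exchange-rate formula; this is an assumed convention, and the paper's exact computation may differ:

```python
import math

def lex_8h(leq_dba, hours_per_day):
    """Normalize an equivalent continuous A-weighted level to a
    nominal 8-h working day (3-dB exchange rate):
    Lex,8h = Leq + 10*log10(T/8)."""
    return leq_dba + 10.0 * math.log10(hours_per_day / 8.0)
```

For instance, the worst-case median of 76 dBA sustained for 8 h/day corresponds to Lex = 76 dBA, while the same level for 2 h/day drops to about 70 dBA.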
Boehmler, Erick M.; Weber, Matthew A.
1997-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.3 ft. The worst-case contraction scour occurred at the incipient overtopping discharge, which was less than the 100-year discharge. Abutment scour ranged from 6.2 to 9.4 ft. The worst-case abutment scour for the right abutment was 9.4 feet at the 100-year discharge. The worst-case abutment scour for the left abutment was 8.6 feet at the incipient overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Burns, Ronda L.; Degnan, James R.
1997-01-01
Contraction scour for all modelled flows ranged from 2.6 to 4.6 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge. The left abutment scour ranged from 11.6 to 12.1 ft. The worst-case left abutment scour occurred at the incipient road-overtopping discharge. The right abutment scour ranged from 13.6 to 17.9 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
Bergschmidt, Philipp; Dammer, Rebecca; Zietz, Carmen; Finze, Susanne; Mittelmeier, Wolfram; Bader, Rainer
2016-06-01
Evaluation of the adhesive strength of femoral components to the bone cement is a relevant parameter for predicting implant safety. In the present experimental study, three types of cemented femoral components (metallic, ceramic and silica/silane-layered ceramic) of the bicondylar Multigen Plus knee system, implanted on composite femora, were analysed. A pull-off test of the femoral components was performed after different loading and cementing conditions (four groups, with n=3 each of metallic, ceramic and silica/silane-layered ceramic components per group). Pull-off forces were comparable for the metallic and the silica/silane-layered ceramic femoral components (mean 4769 N and 4298 N) under the standard test condition, whereas uncoated ceramic femoral components showed reduced pull-off forces (mean 2322 N). Loading under worst-case conditions decreased adhesive strength through loosening at the interface between implant and bone cement for the uncoated metallic and ceramic femoral components. Silica/silane-coated ceramic components remained stably fixed even under worst-case conditions. Loading at high flexion angles can induce interfacial tensile stress, which could promote early implant loosening. In conclusion, a silica/silane coating layer on the femoral component increased its adhesive strength to bone cement. Thicker cement mantles (>2 mm) reduce the adhesive strength of the femoral component and can increase the risk of cement break-off.
Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome.
Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack
2016-05-10
Gleason scoring (GS) has major deficiencies and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has been recently agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse for outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure.
Reed-Solomon Codes and the Deep Hole Problem
NASA Astrophysics Data System (ADS)
Keti, Matt
In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field F_q or F_q*, as well as new contexts, when it is based on a subgroup of F_q* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
NASA Astrophysics Data System (ADS)
Calabretta, N.; Cooman, I. A.; Stabile, R.
2018-04-01
We propose for the first time a coupling device concept for passive low-loss optical coupling, which is compatible with the ‘generic’ indium phosphide (InP) multi-project-wafer manufacturing. A low-to-high vertical refractive index contrast transition InP waveguide is designed and tapered down to adiabatically couple light into a top polymer waveguide. The on-chip embedded polymer waveguide is engineered at the chip facets for offering refractive-index and spot-size matching to silica fiber arrays. Numerical analysis shows that coupling losses lower than 1.5 dB can be achieved for TE-polarized light between the InP waveguide and the on-chip embedded polymer waveguide at 1550 nm wavelength. The performance is mainly limited by the difficulty of controlling single-mode operation. However, coupling losses lower than 1.9 dB can be achieved for a bandwidth as large as 200 nm. Moreover, the foreseen fabrication process steps are indicated, which are compatible with the ‘generic’ InP multi-project-wafer manufacturing. A fabrication error tolerance study is performed, indicating that fabrication errors add at most 0.25 dB of worst-case excess loss, as long as high precision lithography is used. The obtained results are promising and may open the route to large port counts and cheap packaging of InP-based photonic integrated chips.
Bond additivity corrections for quantum chemistry methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. F. Melius; M. D. Allendorf
1999-04-01
In the 1980's, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voort, Sebastian van der; Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft; Water, Steven van de
Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The derived robustness recipes are as follows: unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%.
Conclusions: Robustness recipes were derived that can be used in minimax robust optimization of IMPT treatment plans to ensure adequate CTV coverage for oropharyngeal cancer patients.
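The fitted setup-robustness recipes can be evaluated directly; a sketch using the second-order polynomials quoted in the abstract (Σ and σ in mm):

```python
def setup_robustness(sigma_sys, sigma_rand, bilateral=False):
    """Setup-robustness setting SR (mm) from the fitted recipes:
    unilateral: SR = -0.15*S^2 + 0.27*s^2 + 1.85*S - 0.06*s + 1.22
    bilateral:  SR = -0.07*S^2 + 0.19*s^2 + 1.34*S - 0.07*s + 1.17
    where S is the systematic and s the random setup error SD."""
    S, s = sigma_sys, sigma_rand
    if bilateral:
        return -0.07 * S * S + 0.19 * s * s + 1.34 * S - 0.07 * s + 1.17
    return -0.15 * S * S + 0.27 * s * s + 1.85 * S - 0.06 * s + 1.22
```

At the validation point Σ = σ = 1.5 mm this gives SR ≈ 4.18 mm for unilateral and ≈ 3.35 mm for bilateral cases, consistent with bilateral plans being intrinsically more robust.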
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of worst-case scenarios, each with a different uncertainty set. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
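The objective described, a probability-weighted expectation of per-scenario worst cases, can be sketched in miniature for finite uncertainty sets (illustrative only; not the paper's unit-commitment formulation):

```python
def stochastic_robust_objective(scenarios):
    """Stochastic-robust objective in miniature: each scenario is a
    (probability, candidate_costs) pair, where candidate_costs ranges
    over that scenario's own uncertainty set. Take the worst case
    within each scenario, then the expectation across scenarios."""
    return sum(p * max(costs) for p, costs in scenarios)
```

With two equally likely scenarios whose uncertainty sets yield costs {1, 3} and {2, 4}, the objective is 0.5·3 + 0.5·4 = 3.5.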
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
NASA Technical Reports Server (NTRS)
Coakley, P.; Kitterer, B.; Treadaway, M.
1982-01-01
Charging and discharging characteristics of dielectric samples exposed to 1-25 keV and 25-100 keV electrons in a laboratory environment are reported. The materials examined comprised OSR, Mylar, Kapton, perforated Kapton, and Alphaquartz, serving as models for materials employed on spacecraft in geosynchronous orbit. The tests were performed in a vacuum chamber with electron guns whose beams were rastered over the entire surface of the planar samples. The specimens were examined in low-impedance-grounded, high-impedance-grounded, and isolated configurations. The worst-case and average peak discharge currents were observed to be independent of the incident electron energy; the time-dependent changes in the worst-case discharge peak current were likewise independent of the energy; and the predischarge surface potentials were only negligibly dependent on the incident monoenergetic electron energy.
Worst-case space radiation environments for geocentric missions
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Seltzer, S. M.
1976-01-01
Worst-case possible annual radiation fluences of energetic charged particles in the terrestrial space environment, and the resultant depth-dose distributions in aluminum, were calculated in order to establish absolute upper limits to the radiation exposure of spacecraft in geocentric orbits. The results are a concise set of data intended to aid in the determination of the feasibility of a particular mission. The data may further serve as guidelines in the evaluation of standard spacecraft components. Calculations were performed for each significant particle species populating or visiting the magnetosphere, on the basis of volume occupied by or accessible to the respective species. Thus, magnetospheric space was divided into five distinct regions using the magnetic shell parameter L, which gives the approximate geocentric distance (in earth radii) of a field line's equatorial intersect.
"Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acikel, Volkan, E-mail: vacik@ee.bilkent.edu.tr; Atalar, Ergin; Uslubas, Ali
Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements in order to analyze their effect on radio frequency induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and the method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM by changing the inductance and the results were compared with the specific absorption rate results that were obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During a MRI scan, the temperature rise at the lead was measured by changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During a MRI scan, the temperature rise at the electrode was measured by changing the inductance and compared with the dissipated power on the electrode resistance.
Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and the proposed lumped circuit models. These results were compared with those from the MoM simulations. The mean square error was less than 9%. During the MRI experiments, when the IPG case was introduced, the resonance lengths were calculated to have an error less than 13%. Also the change in tip temperature rise at resonance lengths was predicted with less than 4% error. For the electrode experiments, the value of the matching impedance was predicted with an error less than 1%. Conclusions: Electrical models for the IPG case and electrode are suggested, and the method is proposed to determine the parameter values. The concept of matching of the electrode to the lead is clarified using the defined electrode impedance and the lead Thevenin impedance. The effect of the IPG case and electrode on tip heating can be predicted using the proposed theory. With these models, understanding the tissue heating due to the implants becomes easier. Also, these models are beneficial for implant safety testers and designers. Using these models, worst case conditions can be determined and the corresponding implant test experiments can be planned.
Metric for evaluation of filter efficiency in spectral cameras.
Nahavandi, Alireza Mahmoudi; Tehran, Mohammad Amani
2016-11-10
Although metric functions that show the performance of a colorimetric imaging device have been investigated, a metric for performance analysis of a set of filters in wideband filter-based spectral cameras has rarely been studied. Based on a generalization of Vora's Measure of Goodness (MOG) and the spanning theorem, a single function metric that estimates the effectiveness of a filter set is introduced. The improved metric, named MMOG, varies between one for a perfect set of filters and zero for the worst possible set. Results showed that MMOG exhibits a trend that is more similar to the mean square of spectral reflectance reconstruction errors than does Vora's MOG index, and it is robust to noise in the imaging system. MMOG as a single metric could be exploited for further analysis of manufacturing errors.
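Vora-style goodness measures are commonly written as a normalized trace of the product of orthogonal projectors onto the subspaces spanned by the filters and by the target spectra. The sketch below assumes that trace-of-projectors form; it is a generic illustration, not necessarily the paper's exact MMOG definition:

```python
def mog(A_cols, B_cols):
    """Subspace-similarity score in the spirit of Vora's Measure of
    Goodness: trace(P_A @ P_B) / rank(A), which is 1 when the two
    column spans coincide and 0 when they are orthogonal.
    Columns are given as lists of equal-length vectors."""
    def orthonormalize(cols):
        # Modified Gram-Schmidt; drops near-dependent columns.
        basis = []
        for v in cols:
            w = list(v)
            for q in basis:
                dot = sum(wi * qi for wi, qi in zip(w, q))
                w = [wi - dot * qi for wi, qi in zip(w, q)]
            norm = sum(wi * wi for wi in w) ** 0.5
            if norm > 1e-12:
                basis.append([wi / norm for wi in w])
        return basis

    def projector(cols):
        basis = orthonormalize(cols)
        n = len(cols[0])
        return [[sum(q[i] * q[j] for q in basis) for j in range(n)]
                for i in range(n)]

    PA, PB = projector(A_cols), projector(B_cols)
    n = len(PA)
    trace = sum(PA[i][k] * PB[k][i] for i in range(n) for k in range(n))
    return trace / len(orthonormalize(A_cols))
```

A filter set spanning the same subspace as the target scores 1; an orthogonal set scores 0, matching the stated range of MMOG.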
Hepatitis A and E Co-Infection with Worst Outcome.
Saeed, Anjum; Cheema, Huma Arshad; Assiri, Asaad
2016-06-01
Infections are still a major problem in developing countries like Pakistan because of poor sewage disposal and economic restraints. Acute viral hepatitis A and E are not uncommon in the pediatric age group because of unhygienic food handling and poor sewage disposal, but the majority recover well without complications. Co-infections are rare occurrences, and physicians need to be well aware while managing such conditions to avoid the worst outcome. Co-infection with hepatitis A and E is reported occasionally in the literature; other concurrent infections, such as hepatitis A with Salmonella and with hepatotropic viruses like hepatitis B and C, are also described. Co-infections should be kept in consideration when someone presents with atypical symptoms or an unusual disease course, as in the case presented here. We report a girl who had concurrent acute hepatitis A and E infections, presented with hepatic encephalopathy, and had the worst outcome despite all the supportive measures taken.
Implementation of School Health Promotion: Consequences for Professional Assistance
ERIC Educational Resources Information Center
Boot, N. M. W. M.; de Vries, N. K.
2012-01-01
Purpose: This case study aimed to examine the factors influencing the implementation of health promotion (HP) policies and programs in secondary schools and the consequences for professional assistance. Design/methodology/approach: Group interviews were held in two schools that represented the best and worst case of implementation of a health…
Compression in the Superintendent Ranks
ERIC Educational Resources Information Center
Saron, Bradford G.; Birchbauer, Louis J.
2011-01-01
Sadly, the fiscal condition of school systems now not only is troublesome, but in some cases has surpassed all expectations for the worst-case scenario. Among the states, one common response is to drop funding for public education to inadequate levels, leading to permanent program cuts, school closures, staff layoffs, district dissolutions and…
Brain, Richard A; Teed, R Scott; Bang, JiSu; Thorbek, Pernille; Perine, Jeff; Peranginangin, Natalia; Kim, Myoungwoo; Valenti, Ted; Chen, Wenlin; Breton, Roger L; Rodney, Sara I; Moore, Dwayne R J
2015-01-01
Simple, deterministic screening-level assessments that are highly conservative by design facilitate a rapid initial screening to determine whether a pesticide active ingredient has the potential to adversely affect threatened or endangered species. If a worst-case estimate of pesticide exposure is below a very conservative effects metric (e.g., the no observed effects concentration of the most sensitive tested surrogate species) then the potential risks are considered de minimis and unlikely to jeopardize the existence of a threatened or endangered species. Thus by design, such compounded layers of conservatism are intended to minimize potential Type II errors (failure to reject a false null hypothesis of de minimis risk), but correspondingly increase Type I errors (falsely rejecting a true null hypothesis of de minimis risk). Because of the conservatism inherent in screening-level risk assessments, higher-tier scientific information and analyses that provide additional environmental realism can be applied in cases where a potential risk has been identified. This information includes community-level effects data, environmental fate and exposure data, monitoring data, geospatial location and proximity data, species biology data, and probabilistic exposure and population models. Given that the definition of "risk" includes likelihood and magnitude of effect, higher-tier risk assessments should use probabilistic techniques that more accurately and realistically characterize risk. Moreover, where possible and appropriate, risk assessments should focus on effects at the population and community levels of organization rather than the more traditional focus on the organism level. This document provides a review of some types of higher-tier data and assessment refinements available to more accurately and realistically evaluate potential risks of pesticide use to threatened and endangered species. © 2014 SETAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qinghui; Chan, Maria F.; Burman, Chandra
2013-12-15
Purpose: Setting a proper margin is crucial not only for delivering the required radiation dose to a target volume, but also for reducing unnecessary radiation to the adjacent organs at risk. This study investigated the independent one-dimensional symmetric and asymmetric margins between the clinical target volume (CTV) and the planning target volume (PTV) for linac-based single-fraction frameless stereotactic radiosurgery (SRS). Methods: The authors assumed a Dirac delta function for the systematic error of a specific machine and a Gaussian function for the residual setup errors. Margin formulas were then derived in detail to arrive at a suitable CTV-to-PTV margin for single-fraction frameless SRS. Such a margin ensured that the CTV would receive the prescribed dose in 95% of the patients. To validate the margin formalism, the authors retrospectively analyzed nine patients who were previously treated with noncoplanar conformal beams. Cone-beam computed tomography (CBCT) was used in the patient setup. The isocenter shifts between the CBCT and linac were measured for a Varian Trilogy linear accelerator for three months. For each plan, the authors shifted the isocenter of the plan in each direction by ±3 mm simultaneously to simulate the worst setup scenario. Subsequently, the asymptotic behavior of the CTV V{sub 80%} for each patient was studied as the setup error approached the CTV-PTV margin. Results: The authors found that the proper margin for single-fraction frameless SRS cases with brain cancer was about 3 mm for the machine investigated in this study. The isocenter shifts between the CBCT and the linac remained almost constant over a period of three months for this specific machine. This confirmed the assumption that the machine systematic error distribution could be approximated as a delta function. This definition is especially relevant to a single-fraction treatment.
The prescribed dose coverage for all the patients investigated was 96.1% ± 5.5% with an extreme 3-mm setup error in all three directions simultaneously. It was found that the effect of the setup error on dose coverage was tumor-location dependent: it mostly affected tumors located in the posterior part of the brain, resulting in a minimum coverage of approximately 72%. This was entirely due to the unique geometry of the posterior head. Conclusions: Margin expansion formulas were derived for single-fraction frameless SRS such that the CTV would receive the prescribed dose in 95% of the patients treated for brain cancer. The margins defined in this study are machine-specific and account for nonzero mean systematic error. The margin for single-fraction SRS for a group of machines was also derived.
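The 95%-of-patients criterion behind this kind of margin derivation can be illustrated with a toy Monte Carlo check. The paper's formulas are analytic; the systematic shift and residual-error magnitudes below are assumptions for illustration, not the study's measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_fraction(margin_mm, sys_shift_mm, sigma_mm, n_patients=100_000):
    """Fraction of simulated patients whose per-axis setup error
    (fixed systematic shift + Gaussian random residual) stays within
    +/- margin_mm on all three axes.  Illustrative toy model only."""
    residual = rng.normal(0.0, sigma_mm, size=(n_patients, 3))
    total = residual + sys_shift_mm      # same systematic shift on each axis
    return np.mean(np.all(np.abs(total) <= margin_mm, axis=1))

# With a 1 mm systematic shift and 0.8 mm random sd, a 3 mm margin
# covers well over 95% of simulated patients in this toy model.
frac = coverage_fraction(margin_mm=3.0, sys_shift_mm=1.0, sigma_mm=0.8)
```

Shrinking `margin_mm` until `frac` first drops below 0.95 gives the Monte Carlo analogue of the analytic margin.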
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2013 CFR
2013-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2012 CFR
2012-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
40 CFR 85.2115 - Notification of intent to certify.
Code of Federal Regulations, 2014 CFR
2014-07-01
... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...
Code of Federal Regulations, 2011 CFR
2011-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2012 CFR
2012-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2013 CFR
2013-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2014 CFR
2014-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Code of Federal Regulations, 2010 CFR
2010-01-01
... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...
Adaptive Attitude Control of the Crew Launch Vehicle
NASA Technical Reports Server (NTRS)
Muse, Jonathan
2010-01-01
An H(sub infinity)-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal-complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H(sub infinity)-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.
NASA Astrophysics Data System (ADS)
Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.
2017-12-01
The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.
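The "classical expression" for an ideal Rogowski sensor mentioned above is the standard textbook result (notation assumed here: n is the turn density, A the cross-sectional area of the winding, and I_enc the linked current):

```latex
\Lambda(t) = \mu_0 \, n \, A \, I_{\mathrm{enc}}(t),
\qquad
v(t) = -\frac{d\Lambda}{dt} = -\mu_0 \, n \, A \, \frac{dI_{\mathrm{enc}}(t)}{dt}
```

Integrating the coil voltage v(t) recovers the enclosed current, which is why integration drift appears among the error sources listed in the abstract.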
Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome
Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack
2016-01-01
Background: Gleason scoring (GS) has major deficiencies and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has been recently agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse for outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an ‘overall' or ‘worst' GS in biopsy series should be used. Methods: Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Results: Using both ‘worst' and ‘overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation with the five grade groups and prostate cancer death. Conclusions: This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the ‘worst' grade is a valid prognostic measure. PMID:27100731
Thermal, Structural, and Optical Analysis of a Balloon-Based Imaging System
NASA Astrophysics Data System (ADS)
Borden, Michael; Lewis, Derek; Ochoa, Hared; Jones-Wilson, Laura; Susca, Sara; Porter, Michael; Massey, Richard; Clark, Paul; Netterfield, Barth
2017-03-01
The Subarcsecond Telescope And BaLloon Experiment, STABLE, is the fine stage of a guidance system for a high-altitude ballooning platform designed to demonstrate subarcsecond pointing stability over one minute using relatively dim guide stars in the visible spectrum. The STABLE system uses an attitude rate sensor and the motion of the guide star on a detector to control a Fast Steering Mirror to stabilize the image. The characteristics of the thermal-optical-mechanical elements in the system directly affect the quality of the point-spread function of the guide star on the detector, so a series of thermal, structural, and optical models were built to simulate system performance and ultimately inform the final pointing stability predictions. This paper describes the modeling techniques employed in each of these subsystems. The results from those models are discussed in detail, highlighting the development of the worst-case cold and hot cases, the optical metrics generated from the finite element model, and the expected STABLE residual wavefront error and decenter. Finally, the paper concludes with the predicted sensitivities in the STABLE system, which show that thermal deadbanding, structural pre-loading, self-deflection under different loading conditions, and the speed of individual optical elements were particularly important to the resulting STABLE optical performance.
NASA Astrophysics Data System (ADS)
van der Meeren, C.; Oksavik, K.; Moen, J. I.; Romano, V.
2013-12-01
For this study, GPS receiver scintillation and Total Electron Content (TEC) data from high-latitude locations on Svalbard have been combined with several other data sets, including the EISCAT Svalbard Radar (ESR) and all-sky cameras, to perform a multi-instrument case study of high-latitude GPS ionospheric scintillations in relation to drifting plasma irregularities at night over Svalbard on 31 October 2011. Scintillations are rapid amplitude and phase fluctuations of electromagnetic signals. GNSS-based systems may be disturbed by ionospheric plasma irregularities and structures such as plasma patches (areas of enhanced electron density in the polar cap) and plasma gradients. When the GNSS radio signals propagate through such areas, in particular gradients, the signals experience scintillations that at best increase positioning errors and at worst may break the receiver's signal lock, potentially resulting in the GNSS receiver losing track of its position. Due to the importance of many GNSS applications, it is desirable to study the scintillation environment to understand the limitations of GNSS systems. We find scintillation mainly localised to plasma gradients, with predominantly phase scintillation at the leading edge of patches and both phase and amplitude scintillation at the trailing edge. A single edge may also contain different scintillation types at different locations.
You can use this free software program to complete the Off-site Consequence Analyses (both worst case scenarios and alternative scenarios) required under the Risk Management Program rule, so that you don't have to do calculations by hand.
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: Prevention measure Standard Credit(percent) Secondary containment >100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
49 CFR 194.105 - Worst case discharge.
Code of Federal Regulations, 2011 CFR
2011-10-01
...: Prevention measure Standard Credit(percent) Secondary containment > 100% NFPA 30 50 Built/repaired to API standards API STD 620/650/653 10 Overfill protection standards API RP 2350 5 Testing/cathodic protection API...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, J.V. III; Cramer, S.N.; Knight, J.R.
1980-09-01
Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst-case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.
NASA Technical Reports Server (NTRS)
Olson, S. L.
2004-01-01
NASA's current method of material screening determines fire resistance under conditions representing a worst case for normal-gravity flammability - the Upward Flame Propagation Test (Test 1). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.
LANDSAT-D MSS/TM tuned orbital jitter analysis model LDS900
NASA Technical Reports Server (NTRS)
Pollak, T. E.
1981-01-01
The final LANDSAT-D orbital dynamic math model (LSD900), comprised of all test-validated substructures, was used to evaluate the jitter response of the MSS/TM experiments. A dynamic forced response analysis was performed at both the MSS and TM locations on all structural modes considered (through 200 Hz). The analysis determined the roll angular response of the MSS/TM experiments to the excitation generated by component operation. Cross-axis and cross-experiment responses were also calculated. The excitations were analytically represented by seven- and nine-term Fourier series approximations, for the MSS and TM experiments respectively, which enabled linear harmonic solution techniques to be applied to the response calculations. Single worst-case jitter was estimated by variations of the eigenvalue spectrum of model LSD900. The probability of any worst-case mode occurrence was investigated.
An Alaskan Theater Airlift Model.
1982-02-19
...overt attack on American soil. In any case, such a reaction represents the worst-case scenario in that theater forces would be denied the advantages of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-15
... Service (NPS) for the Florida leafwing and the pine rockland ecosystem, in general. Sea Level Rise... habitat. In the best case scenario, which assumes low sea level rise, high financial resources, proactive... human population. In the worst case scenario, which assumes high sea level rise, low financial resources...
A Different Call to Arms: Women in the Core of the Communications Revolution.
ERIC Educational Resources Information Center
Rush, Ramona R.
A "best case" model for the role of women in the postindustrial communications era predicts positive leadership roles based on the preindustrial work characteristics of cooperation and consensus. A "worst case" model finds women entrepreneurs succumbing to the competitive male ethos and extracting the maximum amount of work…
Model Predictive Flight Control System with Full State Observer using H∞ Method
NASA Astrophysics Data System (ADS)
Sanwale, Jitu; Singh, Dhan Jeet
2018-03-01
This paper presents the application of the model predictive approach to design a flight control system (FCS) for the longitudinal dynamics of a fixed-wing aircraft. Longitudinal dynamics are derived for a conventional aircraft, and an open-loop aircraft response analysis is carried out. Simulation studies illustrate the efficacy of the proposed model predictive controller using an H∞ state observer. The estimation criterion used in the H∞ observer design is to minimize the worst possible effects of the modelling errors and additive noise on the parameter estimation.
Orbit covariance propagation via quadratic-order state transition matrix in curvilinear coordinates
NASA Astrophysics Data System (ADS)
Hernando-Ayuso, Javier; Bombardelli, Claudio
2017-09-01
In this paper, an analytical second-order state transition matrix (STM) for relative motion in curvilinear coordinates is presented and applied to the problem of orbit uncertainty propagation in nearly circular orbits (eccentricity smaller than 0.1). The matrix is obtained by linearization around a second-order analytical approximation of the relative motion recently proposed by one of the authors and can be seen as a second-order extension of the curvilinear Clohessy-Wiltshire (C-W) solution. The accuracy of the uncertainty propagation is assessed by comparison with numerical results based on Monte Carlo propagation of a high-fidelity model including geopotential and third-body perturbations. Results show that the proposed STM can greatly improve the accuracy of the predicted relative state: the average error is found to be at least one order of magnitude smaller compared to the curvilinear C-W solution. In addition, the effect of environmental perturbations on the uncertainty propagation is shown to be negligible up to several revolutions in the geostationary region and for a few revolutions in low Earth orbit in the worst case.
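The covariance mapping underlying this kind of STM-based uncertainty propagation can be sketched at first order; the paper's contribution is a second-order extension of the transition matrix itself, which this minimal numpy sketch (with illustrative values) does not include.

```python
import numpy as np

def propagate_covariance(phi, P0):
    """First-order covariance propagation through a state transition
    matrix: P1 = Phi @ P0 @ Phi^T.  A second-order STM, as in the paper,
    adds quadratic correction terms on top of this linear mapping."""
    return phi @ P0 @ phi.T

# Toy 2-state example: a shear-like STM stretches the uncertainty ellipse.
phi = np.array([[1.0, 0.5],
                [0.0, 1.0]])
P0 = np.diag([1.0, 4.0])
P1 = propagate_covariance(phi, P0)
```

The propagated covariance stays symmetric by construction, a useful sanity check when validating any STM against Monte Carlo propagation.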
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
Bhattacharya, Sanghita; Nayak, Aniruddh; Goel, Vijay K; Warren, Chris; Schlaegle, Steve; Ferrara, Lisa
2010-01-01
Dynamic stabilization systems are emerging as an alternative to fusion instrumentation. However, cyclic loading and micro-motion at various interfaces may produce wear debris, leading to adverse tissue reactions such as osteolysis. A ten-million-cycle wear test was performed on the PercuDyn™ in axial rotation, and the wear profile and wear rate were mapped. A validation study was undertaken to assess the efficiency of wear debris collection, which accounted for experimental errors. The mean wear debris measured at the end of 10 million cycles was 4.01 mg, based on the worst-case recovery rate of 68.2%. Approximately 40% of the particulates were less than 5 μm; 92% were less than 10 μm. About 43% of the particulates were spherical, 27% were ellipsoidal, and the remaining particles were of irregular shapes. The PercuDyn™ exhibited an average polymeric wear rate of 0.4 mg/million cycles, substantially less than literature-derived values for other motion preservation devices such as the Bryan disc and the Charité disc. Wear debris size and shape were also similar to those of these devices.
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at low cost has led many industrial nations to treat tolerances as a key factor in controlling cost and remaining competitive. In practice, tolerance allocation is still applied mostly to mechanical systems. Monte-Carlo methods are commonly used to study tolerances in the electronic domain, but they are time-consuming. This paper reviews several methods (worst-case analysis, statistical methods, least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte-Carlo method providing the training data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can easily be extended to a complex system of n components.
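As an illustration of the worst-case and Monte-Carlo (statistical) methods the paper compares, the sketch below applies both to a simple resistor divider rather than the paper's small-signal amplifier; the component values and tolerance are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)

def divider_gain(r1, r2):
    """DC gain of a two-resistor voltage divider."""
    return r2 / (r1 + r2)

R1, R2, tol = 10e3, 10e3, 0.05          # nominal 10k resistors, 5% tolerance

# Worst-case analysis: gain is monotonic in r1 and r2, so the extremes
# occur at the tolerance corners.
corners = [divider_gain(R1 * (1 + s1 * tol), R2 * (1 + s2 * tol))
           for s1 in (-1, 1) for s2 in (-1, 1)]
wc_lo, wc_hi = min(corners), max(corners)

# Statistical (Monte-Carlo) analysis: uniform spread of part values.
r1 = R1 * (1 + tol * rng.uniform(-1, 1, 100_000))
r2 = R2 * (1 + tol * rng.uniform(-1, 1, 100_000))
gains = divider_gain(r1, r2)
```

The Monte-Carlo sample always falls inside the worst-case bounds but concentrates near the nominal gain, which is exactly the gap between the two methods that a statistical or surrogate-based allocation exploits.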
Effect of exit beam phase aberrations on coherent x-ray reconstructions of Au nanocrystals
NASA Astrophysics Data System (ADS)
Hruszkewycz, Stephan; Harder, Ross; Fuoss, Paul
2010-03-01
Current studies in coherent x-ray diffractive imaging (CXDI) are focusing on in-situ imaging under a variety of environmental conditions. Such studies often involve environmental sample chambers through which the x-ray beam must pass before and after interacting with the sample: i.e. cryostats or high pressure cells. Such sample chambers usually contain polycrystalline x-ray windows with structural imperfections that can in turn interact with the diffracted beam. A phase object in the near field that interacts with the beam exiting the sample can introduce distortions at the detector plane that may affect coherent reconstructions. We investigate the effects of a thin beryllium membrane on the coherent exit beam of a gold nanoparticle. We compare three dimensional reconstructions from experimental diffraction patterns measured with and without a 380 micron thick Be dome and find that the reconstructions are reproducible within experimental errors. Simulated near-field distortions of the exit beam consistent with micron sized voids in Be establish a "worst case scenario" where distorted diffraction patterns inhibit accurate inversions.
Prato, S; La Valle, P; De Luca, E; Lattanzi, L; Migliore, G; Morgana, J G; Munari, C; Nicoletti, L; Izzo, G; Mistri, M
2014-03-15
The Water Framework Directive uses the "one-out, all-out" principle in assessing water bodies (i.e., the worst status of the elements used in the assessment determines the final status of the water body). In this study, we assessed the ecological status of two coastal lakes in Italy. Indices for all biological quality elements used in transitional waters from the Italian legislation and other European countries were employed and compared. Based on our analyses, the two lakes require restoration, despite the lush harbor seagrass beds, articulated macrobenthic communities and rich fish fauna. The "one-out, all-out" principle tends to inflate Type I errors, i.e., concludes that a water body is below the "good" status even if the water body actually has a "good" status. This may cause additional restoration costs where they are not necessarily needed. The results from this study strongly support the need for alternative approaches to the "one-out, all-out" principle. Copyright © 2014 Elsevier Ltd. All rights reserved.
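The Type I inflation described here follows from elementary probability: if each of k element-level assessments independently fails a truly "good" water body with probability alpha, the combined "one-out, all-out" false-failure probability is 1 − (1 − alpha)^k. A minimal sketch (the alpha and k values are illustrative assumptions):

```python
def one_out_all_out_type1(alpha, k):
    """Probability that a truly 'good' water body is classified below
    'good' when the worst of k independent element assessments decides,
    each with per-element false-failure probability alpha."""
    return 1.0 - (1.0 - alpha) ** k

# Four quality elements, each with a 5% false-failure rate, push the
# combined Type I error rate to roughly 18.5%.
p_fail = one_out_all_out_type1(0.05, 4)
```

Adding elements only raises this probability, which is why the abstract argues the principle can trigger restoration costs where none are needed.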
RMP Guidance for Offsite Consequence Analysis
Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.
NASA Astrophysics Data System (ADS)
McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert
2016-08-01
Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate the error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces.
With ambient temperature set to 0°C and conditioned temperature set to 20°C, representing the worst-case temperature gradient, the spatial rms wavefront error in units of wavelength is 0.178 (88.69 nm at λ = 500 nm).
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were estimated separately from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs lower by up to 35% than that of Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging.
Comparison of the performance of four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situation. © 2018 American Association of Physicists in Medicine.
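The evaluation metric above, RMSE over estimated motion vector fields, can be sketched in a few lines; the array shapes and error values below are invented for illustration, not taken from the study:

```python
import numpy as np

# RMSE between an estimated and a ground-truth motion vector field (MVF),
# taken over all voxels and all vector components.
def mvf_rmse(est, truth):
    """Root mean square error of an estimated MVF against ground truth."""
    return float(np.sqrt(np.mean((est - truth) ** 2)))

truth = np.zeros((4, 4, 4, 3))   # toy 4x4x4 voxel grid, 3-component vectors
est = truth + 0.5                # uniform 0.5 error in every component
print(mvf_rmse(est, truth))      # -> 0.5
```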
A new compression format for fiber tracking datasets.
Presseau, Caroline; Jodoin, Pierre-Marc; Houde, Jean-Christophe; Descoteaux, Maxime
2015-04-01
A single diffusion MRI streamline fiber tracking dataset may contain hundreds of thousands, and often millions, of streamlines and can take up to several gigabytes of memory. This amount of data is not only heavy to compute, but also difficult to visualize and hard to store on disk (especially when dealing with a collection of brains). These problems call for a fiber-specific compression format that simplifies its manipulation. As of today, no fiber compression format has yet been adopted, and the need for one is becoming an issue for future connectomics research. In this work, we propose a new compression format, .zfib, for streamline tractography datasets reconstructed from diffusion magnetic resonance imaging (dMRI). Tracts contain a large amount of redundant information and are relatively smooth. Hence, they are highly compressible. The proposed method is a processing pipeline containing a linearization, a quantization and an encoding step. Our pipeline is tested and validated under a wide range of DTI and HARDI tractography configurations (step size, streamline number, deterministic and probabilistic tracking) and compression options. Similar to JPEG, the user has one parameter to select: a worst-case maximum tolerance error in millimeters (mm). Overall, we find a compression factor of more than 96% for a maximum error of 0.1 mm without any perceptual change or change of diffusion statistics (mean fractional anisotropy and mean diffusivity) along bundles. This opens new opportunities for connectomics and tractometry applications. Copyright © 2014 Elsevier Inc. All rights reserved.
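The quantization step with a user-set worst-case error can be illustrated with uniform grid snapping: choosing a grid step of twice the tolerance bounds the per-axis rounding error by the tolerance. This is only a sketch of the idea, not the actual .zfib pipeline:

```python
import numpy as np

# Uniform quantization with a guaranteed worst-case per-axis error:
# rounding to a grid of step 2*tol gives a rounding error of at most tol.
def quantize(points, tol):
    step = 2.0 * tol                               # max rounding error = step/2 = tol
    return np.round(points / step).astype(np.int32), step

def dequantize(codes, step):
    return codes * step

pts = np.array([[1.23, 4.56, 7.89]])               # toy streamline point, in mm
codes, step = quantize(pts, tol=0.1)
restored = dequantize(codes, step)
print(np.max(np.abs(restored - pts)) <= 0.1)       # -> True
```

The integer codes are then far more compressible by a generic entropy coder than the original floating-point coordinates.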
Techniques for measurement of thoracoabdominal asynchrony
NASA Technical Reports Server (NTRS)
Prisk, G. Kim; Hammer, J.; Newth, Christopher J L.
2002-01-01
Respiratory motion measured by respiratory inductance plethysmography often deviates from the sinusoidal pattern assumed in the traditional Lissajous figure (loop) analysis used to determine thoraco-abdominal asynchrony, or phase angle phi. We investigated six different time-domain methods of measuring phi, using simulated data with sinusoidal and triangular waveforms, phase shifts of 0-135 degrees, and 10% noise. The techniques were then used on data from 11 lightly anesthetized rhesus monkeys (Macaca mulatta; 7.6 +/- 0.8 kg; 5.7 +/- 0.5 years old), instrumented with a respiratory inductive plethysmograph and subjected to increasing levels of inspiratory resistive loading ranging from 5-1,000 cmH2O·L(-1)·s(-1). The best results were obtained from cross-correlation and maximum linear correlation, with errors less than approximately 5 degrees from the actual phase angle in the simulated data. The worst performance was produced by the loop analysis, which in some cases was in error by more than 30 degrees. Compared to correlation, the other analysis techniques performed at an intermediate level. Maximum linear correlation and cross-correlation produced similar results on the data collected from monkeys (SD of the difference, 4.1 degrees), but all other techniques had a high SD of the difference compared to the correlation techniques. We conclude that phase angles are best measured using cross-correlation or maximum linear correlation, techniques that are independent of waveform shape and robust in the presence of noise. Copyright 2002 Wiley-Liss, Inc.
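The cross-correlation approach can be sketched as follows: find the lag that maximizes the correlation between the two waveforms and convert it to degrees of phase. The signals and sampling below are synthetic, not the study's data:

```python
import numpy as np

# Phase angle via cross-correlation: search the lag (within one cycle) that
# maximizes the correlation between two periodic signals.
fs = 1000                        # samples per cycle
t = np.arange(5 * fs) / fs       # five cycles
true_phi = 45.0                  # degrees, known only to the simulation
a = np.sin(2 * np.pi * t)                         # "ribcage" signal
b = np.sin(2 * np.pi * t - np.deg2rad(true_phi))  # "abdomen" signal, lagging

lags = np.arange(fs)             # candidate lags within one cycle
corrs = [np.corrcoef(a[:-fs], b[k:k + len(a) - fs])[0, 1] for k in lags]
phi = 360.0 * lags[int(np.argmax(corrs))] / fs
print(phi)                       # -> 45.0
```

Because only the correlation peak is used, the estimate does not depend on the waveform being sinusoidal, which is the property the abstract highlights.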
Planning Education for Regional Economic Integration: The Case of Paraguay and MERCOSUR.
ERIC Educational Resources Information Center
McGinn, Noel
This paper examines the possible impact of MERCOSUR on Paraguay's economic and educational systems. MERCOSUR is a trade agreement among Argentina, Brazil, Paraguay, and Uruguay, under which terms all import tariffs among the countries will be eliminated by 1994. The countries will enter into a common economic market. The worst-case scenario…
Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses
NASA Technical Reports Server (NTRS)
Choi, Michael K.; Emery, Josh; Delbo, Marco
2014-01-01
A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.
Availability Simulation of AGT Systems
DOT National Transportation Integrated Search
1975-02-01
The report discusses the analytical and simulation procedures that were used to evaluate the effects of failure in a complex dual mode transportation system based on a worst-case steady-state condition. The computed results are an availability figure ...
Carbon monoxide screen for signalized intersections COSIM, version 3.0 : technical documentation.
DOT National Transportation Integrated Search
2008-07-01
The Illinois Department of Transportation (IDOT) currently uses the computer screening model Illinois : CO Screen for Intersection Modeling (COSIM) to estimate worst-case CO concentrations for proposed roadway : projects affecting signalized intersec...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
40 CFR 68.25 - Worst-case release scenario analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...
RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis
Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.
RMP Guidance for Chemical Distributors - Chapter 4: Offsite Consequence Analysis
How to perform the OCA for regulated substances, informing the government and the public about potential consequences of an accidental chemical release at your facility. Includes calculations for worst-case scenario, alternative scenarios, and endpoints.
... damage to the tissue and bone supporting the teeth. In the worst cases, you can lose teeth. In gingivitis, the gums become red and swollen. ... flossing and regular cleanings by a dentist or dental hygienist. Untreated gingivitis can lead to periodontitis. If ...
NASA Technical Reports Server (NTRS)
Bury, Kristen M.; Kerslake, Thomas W.
2008-01-01
NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° towards the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.
Response of the North American corn belt to climate warming, CO2
NASA Astrophysics Data System (ADS)
1983-08-01
The climate of the North American corn belt was characterized to estimate the effects of climatic change on that agricultural region. Heat and moisture characteristics of the current corn belt were identified and mapped based on a simulated climate for a doubling of atmospheric CO2 concentrations. The result was a map of the projected corn belt corresponding to the simulated climatic change. Such projections were made with and without an allowance for earlier planting dates that could occur under a CO2-induced climatic warming. Because the direct effects of CO2 increases on plants, improvements in farm technology, and plant breeding are not considered, the resulting projections represent an extreme or worst case. The results indicate that even for such a worst case, climatic conditions favoring corn production would not extend very far into Canada. Climatic buffering effects of the Great Lakes would apparently retard northeastward shifts in corn-belt location.
Centaur Propellant Thermal Conditioning Study
NASA Technical Reports Server (NTRS)
Blatt, M. H.; Pleasant, R. L.; Erickson, R. C.
1976-01-01
A wicking investigation revealed that passive thermal conditioning was feasible and provided a considerable weight advantage over active systems using throttled vent fluid in a Centaur D-1S launch vehicle. Experimental wicking correlations were obtained using empirical revisions to the analytical flow model. Thermal subcoolers were evaluated parametrically as a function of tank pressure and NPSP. Results showed that the RL10 category I engine was the best candidate for boost pump replacement, and the option showing the lowest weight penalty employed passively cooled acquisition devices, thermal subcoolers, dry ducts between burns, and pumping of subcooler coolant back into the tank. A mixing correlation was identified for sizing the thermodynamic vent system mixer. Worst-case mixing requirements were determined by surveying Centaur D-1T, D-1S, IUS, and space tug vehicles. Vent system sizing was based upon worst-case requirements. Thermodynamic vent system/mixer weights were determined for each vehicle.
Updated model assessment of pollution at major U. S. airports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamartino, R.J.; Rote, D.M.
1979-02-01
The air quality impact of aircraft at and around Los Angeles International Airport (LAX) was simulated for hours of peak aircraft operation and 'worst case' pollutant dispersion conditions by using an updated version of the Argonne Airport Vicinity Air Pollution model; field programs at LAX, O'Hare, and John F. Kennedy International Airports determined the 'worst case' conditions. Maximum carbon monoxide concentrations at LAX were low relative to National Ambient Air Quality Standards; relatively high and widespread hydrocarbon concentrations indicated that aircraft emissions may aggravate oxidant problems near the airport; nitrogen oxide concentrations were close to the levels set in proposed standards. Data on typical time-in-mode for departing and arriving aircraft, the 8/4/77 diurnal variation in airport activity, and carbon monoxide concentration isopleths are given, and the updates to the model are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel
Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst-case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. Performance/watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst-case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low-thread-count workloads. In this paper, we describe five power management techniques that maximize the per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15% depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.
VEGA Launch Vehicle Dynamic Environment: Flight Experience and Qualification Status
NASA Astrophysics Data System (ADS)
Di Trapani, C.; Fotino, D.; Mastrella, E.; Bartoccini, D.; Bonnet, M.
2014-06-01
During flight, the VEGA Launch Vehicle (LV) is equipped with more than 400 sensors (pressure transducers, accelerometers, microphones, strain gauges, etc.) aimed at capturing the physical phenomena occurring during the mission. The main objective of these sensors is to verify that the flight conditions are compliant with the launch vehicle and satellite qualification status and to characterize the phenomena that occur during flight. During VEGA development, several test campaigns were performed in order to characterize its dynamic environment and identify the worst-case conditions, but only with flight data analysis is it possible to confirm the worst cases identified and check the compliance of the operative life conditions with the components' qualification status. The scope of the present paper is to show a comparison of the sinusoidal dynamic phenomena that occurred during VEGA's first and second flights and to give a summary of the launch vehicle qualification status.
NASA Astrophysics Data System (ADS)
Bury, Kristen M.; Kerslake, Thomas W.
2008-06-01
NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° towards the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n ln(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
Statistical analysis of QC data and estimation of fuel rod behaviour
NASA Astrophysics Data System (ADS)
Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.
1991-02-01
The behaviour of fuel rods in the reactor is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (a worst-case dataset) in fuel rod design calculations; distributions are not considered. The results obtained in this way are very conservative, but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst-case dataset by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
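The contrast between stacked worst-case tolerances and Monte Carlo propagation of distributions can be sketched with a toy linear response; the parameter names, tolerances, and the response function below are invented for illustration, not the fuel rod models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented nominal values and symmetric tolerances (arbitrary units).
nominal = {"pellet_diam": 9.10, "density": 10.4, "clad_id": 9.30}
tol = {"pellet_diam": 0.02, "density": 0.1, "clad_id": 0.02}

def response(pellet_diam, density, clad_id):
    # Invented monotone response: grows with pellet size and density,
    # shrinks as the pellet-cladding gap opens.
    return 10 * pellet_diam + 2 * density - 8 * (clad_id - pellet_diam)

# Worst case: every parameter pushed to its most unfavorable tolerance limit.
worst = response(nominal["pellet_diam"] + tol["pellet_diam"],
                 nominal["density"] + tol["density"],
                 nominal["clad_id"] - tol["clad_id"])

# Monte Carlo: tolerances treated as +/-2-sigma bounds of normal distributions.
n = 100_000
samples = response(
    rng.normal(nominal["pellet_diam"], tol["pellet_diam"] / 2, n),
    rng.normal(nominal["density"], tol["density"] / 2, n),
    rng.normal(nominal["clad_id"], tol["clad_id"] / 2, n))
p99 = np.percentile(samples, 99)
print(p99 < worst)   # -> True: the 99th percentile sits below the stacked worst case
```

The gap between the 99th percentile and the stacked worst case is the (previously unquantified) extra conservatism that the probabilistic approach makes explicit.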
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both the decomposition and search approaches.
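The search side of the comparison can be made concrete with a minimal backtracking solver for a binary CSP. This is only a generic sketch on a toy graph-coloring instance, not the omega-CDBT algorithm itself:

```python
# Minimal chronological backtracking for a binary CSP: assign variables one
# at a time, keeping only values consistent with all earlier assignments.
def backtrack(assignment, variables, domains, conflicts):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(not conflicts(var, value, v, assignment[v]) for v in assignment):
            result = backtrack({**assignment, var: value}, variables, domains, conflicts)
            if result is not None:
                return result
    return None   # dead end: undo and try the next value one level up

# Toy instance: 3-coloring of a triangle plus a pendant vertex.
variables = ["a", "b", "c", "d"]
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
domains = {v: ["red", "green", "blue"] for v in variables}
conflicts = lambda x, xv, y, yv: ((x, y) in edges or (y, x) in edges) and xv == yv
solution = backtrack({}, variables, domains, conflicts)
print(solution is not None)   # -> True
```

Plain backtracking like this has exponential worst-case cost; the decomposition methods discussed in the abstract bound the cost by structural measures of the constraint graph instead.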
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barragán, A. M., E-mail: ana.barragan@uclouvain.be; Differding, S.; Lee, J. A.
Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, a voxel-by-voxel prescription to the target volume (GTV_PET) was calculated from 18FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of a range shifter were separately tested on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separations [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spot grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robustness for coverage of the GTV_PET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage while robust-optimized plans improved it considerably (the difference between the worst-case CTV dose and the clinical constraint was up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans).
Also, only 15% of the points in the GTV_PET (worst case) were above 5% of the DPBN prescription for robust-optimized plans, while they were more than 50% for PTV plans. Low dose to organs at risk (OARs) could be achieved for both PTV and robust-optimized plans. Conclusions: DPBN in proton therapy is feasible with the use of a sufficient number of subcontours and automatically generated scanning patterns, and no more than three beams are needed. Robust optimization ensured the required target coverage and minimal overdosing, while the PTV approach led to nonrobust plans with excessive overdose. Low dose to OARs can be achieved even in the presence of a high dose escalation as in DPBN.
ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?
La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim
2014-10-01
Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved F1717 standard for the assessment of mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent the clinical use, but available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or measurements on biplanar stereoradiography on 13 patients. Other mechanical variables, describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination depending on the spinal level. The standard set-up represents quite well the anatomy of an instrumented average thoracolumbar segment. The stress on the pedicular screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst case condition we found at L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.
Learning Search Control Knowledge for Deep Space Network Scheduling
NASA Technical Reports Server (NTRS)
Gratch, Jonathan; Chien, Steve; DeJong, Gerald
1993-01-01
While most scheduling problems are NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.
Availability Analysis of Dual Mode Systems
DOT National Transportation Integrated Search
1974-04-01
The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...
Part of a May 1999 series on the Risk Management Program Rule and issues related to chemical emergency management. Explains hazard versus risk, worst-case and alternative release scenarios, flammable endpoints and toxic endpoints.
General RMP Guidance - Chapter 4: Offsite Consequence Analysis
This chapter provides basic compliance information, not modeling methodologies, for people who plan to do their own air dispersion modeling. OCA is a required part of the risk management program, and involves worst-case and alternative release scenarios.
INCORPORATING NONCHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS
The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2011 CFR
2011-07-01
... limits of current technology, for the range of environmental conditions anticipated at your facility; and... Society for Testing and Materials (ASTM) publication F625-94, Standard Practice for Describing...
30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., materials, support vessels, and strategies listed are suitable, within the limits of current technology, for... equipment. Examples of acceptable terms include those defined in American Society for Testing and Materials...
Board Level Proton Testing Book of Knowledge for NASA Electronic Parts and Packaging Program
NASA Technical Reports Server (NTRS)
Guertin, Steven M.
2017-01-01
This book of knowledge (BoK) provides a critical review of the benefits and difficulties associated with using proton irradiation as a means of exploring the radiation hardness of commercial-off-the-shelf (COTS) systems. This work was developed for the NASA Electronic Parts and Packaging (NEPP) Board Level Testing for the COTS task. The fundamental findings of this BoK are the following. The board-level test method can reduce the worst-case estimate for a board's single-event effect (SEE) sensitivity compared to the case of no test data, but only by a factor of ten. The estimated worst-case rate of failure for untested boards is about 0.1 SEE/board-day. By using protons with energies near or above 200 MeV, this rate can be safely reduced to 0.01 SEE/board-day, with only those SEEs with deep charge collection mechanisms rising this high. For general SEEs, such as static random-access memory (SRAM) upsets, single-event transients (SETs), single-event gate ruptures (SEGRs), and similar cases where the relevant charge collection depth is less than 10 µm, the worst-case rate for SEE is below 0.001 SEE/board-day. Note that these bounds assume that no SEEs are observed during testing. When SEEs are observed during testing, the board-level test method can establish a reliable event rate in some orbits, though all established rates will be at or above 0.001 SEE/board-day. The board-level test approach we explore has picked up support as a radiation hardness assurance technique over the last twenty years. The approach was originally used to provide a very limited verification of the suitability of low-cost assemblies to be used in the very benign environment of the International Space Station (ISS), in limited reliability applications. Recently the method has been gaining popularity as a way to establish a minimum level of SEE performance of systems that require somewhat higher reliability performance than previous applications.
This sort of application of the method suggests a critical analysis of the method is in order. This is also of current consideration because the primary facility used for this type of work, the Indiana University Cyclotron Facility (IUCF) (also known as the Integrated Science and Technology (ISAT) hall), has closed permanently, and the future selection of alternate test facilities is critically important. This document reviews the main theoretical work on proton testing of assemblies over the last twenty years. It augments this with a review of reported data generated from the method and other data that applies to the limitations of the proton board-level test approach. When protons are incident on a system for test, they can produce spallation reactions. From these reactions, secondary particles with linear energy transfers (LETs) significantly higher than the incident protons can be produced. These secondary particles, together with the protons, can simulate a subset of the space environment for particles capable of inducing single event effects (SEEs). The proton board-level test approach has been used to bound SEE rates, establishing a maximum possible SEE rate that a test article may exhibit in space. This bound is not particularly useful in many cases because the bound is quite loose. We discuss the established limit that the proton board-level test approach leaves us with. The remaining possible SEE rates may be as high as one per ten years for most devices. The situation is actually more problematic for many SEE types with deep charge collection. In cases with these SEEs, the limits set by the proton board-level test can be on the order of one per 100 days. Because of the limited nature of the bounds established by proton testing alone, it is possible that tested devices will have actual SEE sensitivity that is very low (e.g., fewer than one event in 1 × 10⁴ years), but the test method will only be able to establish the limits indicated above.
This BoK further examines other benefits of proton board-level testing besides hardness assurance. The primary alternate use is the injection of errors. Error injection, or fault injection, is something that is often done in a simulation environment. But the proton beam has the benefit of injecting the majority of actual SEEs without risk of something being missed, and without the risk of simulation artifacts misleading the SEE investigation.
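One ingredient of the rate-bounding argument above is standard Poisson statistics: a test that observes zero SEEs at a given proton fluence only bounds the event cross section from above, and that bound translates into a worst-case on-orbit rate. The sketch below shows just this null-result arithmetic; the fluence and orbit-flux numbers in the usage note are hypothetical, and a real bound must also account for the LET coverage of proton-induced secondaries discussed in the abstract.

```python
import math

def poisson_upper_limit(n_observed=0, confidence=0.95):
    """Upper limit on a Poisson mean given n_observed events.
    For zero observed events the exact limit is -ln(1 - CL), about 3.0 at 95%."""
    if n_observed == 0:
        return -math.log(1.0 - confidence)
    raise NotImplementedError("only the null-result case is sketched here")

def bounded_see_rate(test_fluence_p_cm2, orbit_flux_p_cm2_day, confidence=0.95):
    """Worst-case on-orbit SEE rate (events/board-day) implied by a null test:
    cross-section upper limit (cm^2) times assumed orbit-averaged proton flux."""
    sigma_ul = poisson_upper_limit(0, confidence) / test_fluence_p_cm2
    return sigma_ul * orbit_flux_p_cm2_day
```

For example, a null test at a (hypothetical) fluence of 1e11 p/cm², combined with a (hypothetical) orbit-averaged flux of 1e5 p/cm²/day, bounds the rate at roughly 3e-6 events/board-day.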
Characteristics of worst hour rainfall rate for radio wave propagation modelling in Nigeria
NASA Astrophysics Data System (ADS)
Osita, Ibe; Nymphas, E. F.
2017-10-01
Radio waves, especially in the millimeter-wave band, are known to be attenuated by rain. Radio engineers and designers need to be able to predict the time of day when a radio signal will be attenuated, so as to provide measures to mitigate this effect. This is achieved by characterizing the rainfall intensity for a particular region of interest into worst month and worst hour of the day. This paper characterizes rainfall in Nigeria into worst year, worst month, and worst hour. It is shown that for the period of study, 2008 and 2009 were the worst years, while September was the most frequent worst month in most of the stations. The evening hours (local time, LT) were the worst hours of the day in virtually all the stations.
Speech outcomes in Cantonese patients after glossectomy.
Wong, Ripley Kit; Poon, Esther Sok-Man; Woo, Cynthia Yuen-Man; Chan, Sabina Ching-Shun; Wong, Elsa Siu-Ping; Chu, Ada Wai-Sze
2007-08-01
We sought to determine the major factors affecting speech production of Cantonese-speaking glossectomized patients. Error pattern was analyzed. Forty-one Cantonese-speaking subjects who had undergone glossectomy ≥ 6 months previously were recruited. Speech production evaluation included (1) phonetic error analysis in nonsense syllables; (2) speech intelligibility in sentences evaluated by naive listeners; (3) overall speech intelligibility in conversation evaluated by experienced speech therapists. Patients receiving adjuvant radiotherapy had significantly poorer segmental and connected speech production. Total or subtotal glossectomy also resulted in poor speech outcomes. Patients having free flap reconstruction showed the best speech outcomes. Patients without lymph node metastasis had significantly better speech scores when compared with patients with lymph node metastasis. Initial consonant production had the worst scores, while vowel production was the least affected. Speech outcomes of Cantonese-speaking glossectomized patients depended on the severity of the disease. Initial consonants had the greatest effect on speech intelligibility.
Stressful life events and catechol-O-methyl-transferase (COMT) gene in bipolar disorder.
Hosang, Georgina M; Fisher, Helen L; Cohen-Woods, Sarah; McGuffin, Peter; Farmer, Anne E
2017-05-01
A small body of research suggests that gene-environment interactions play an important role in the development of bipolar disorder. The aim of the present study is to contribute to this work by exploring the relationship between stressful life events and the catechol-O-methyl-transferase (COMT) Val158Met polymorphism in bipolar disorder. Four hundred eighty-two bipolar cases and 205 psychiatrically healthy controls completed the List of Threatening Experiences Questionnaire. Bipolar cases reported the events experienced 6 months before their worst depressive and manic episodes; controls reported those events experienced 6 months prior to their interview. The genotypic information for the COMT Val158Met variant (rs4680) was extracted from GWAS analysis of the sample. The impact of stressful life events was moderated by the COMT genotype for the worst depressive episode using a Val dominant model (adjusted risk difference = 0.09, 95% confidence intervals = 0.003-0.18, P = .04). For the worst manic episodes no significant interactions between COMT and stressful life events were detected. This is the first study to explore the relationship between stressful life events and the COMT Val158Met polymorphism focusing solely on bipolar disorder. The results of this study highlight the importance of the interplay between genetic and environmental factors for bipolar depression. © 2017 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Zirkel, Sabrina; Pollack, Terry M.
2016-01-01
We present a case analysis of the controversy and public debate generated from a school district's efforts to address racial inequities in educational outcomes by diverting special funds from the highest performing students seeking elite college admissions to the lowest performing students who were struggling to graduate from high school.…
2008-03-01
Adversarial Tripolarity (Section VII); Fallen Nuclear Dominoes (Section VIII). ...power dimension, it is possible to imagine a best case (deep concert) and a worst case (adversarial tripolarity) and some less extreme outcomes, one... vanquished and the sub-regions have settled into relative stability). 5. Adversarial U.S.-Russia-China tripolarity: In this world, the regional...
ERIC Educational Resources Information Center
Marginson, Simon
This study examined the character of the emerging systems of corporate management in Australian universities and their effects on academic and administrative practices, focusing on relations of power. Case studies were conducted at 17 individual universities of various types. In each institution, interviews were conducted with senior…
Elementary Social Studies in 2005: Danger or Opportunity?--A Response to Jeff Passe
ERIC Educational Resources Information Center
Libresco, Andrea S.
2006-01-01
From the emphasis on lower-level test-prep materials to the disappearance of the subject altogether, elementary social studies is, in the best case scenario, being tested and, thus, taught with a heavy emphasis on recall; and, in the worst-case scenario, not being taught at all. In this article, the author responds to Jeff Passe's views on…
Thermal Analysis of a Metallic Wing Glove for a Mach-8 Boundary-Layer Experiment
NASA Technical Reports Server (NTRS)
Gong, Leslie; Richards, W. Lance
1998-01-01
A metallic 'glove' structure has been built and attached to the wing of the Pegasus™ space booster. An experiment on the upper surface of the glove has been designed to help validate boundary-layer stability codes in a free-flight environment. Three-dimensional thermal analyses have been performed to ensure that the glove structure design would be within allowable temperature limits in the experiment test section of the upper skin of the glove. Temperature results obtained from the design-case analysis show a peak temperature at the leading edge of 490 °F. For the upper surface of the glove, approximately 3 in. back from the leading edge, temperature calculations indicate transition occurs at approximately 45 sec into the flight profile. A worst-case heating analysis has also been performed to ensure that the glove structure would not have any detrimental effects on the primary objective of the Pegasus launch. A peak temperature of 805 °F has been calculated on the leading edge of the glove structure. The temperatures predicted from the design case are well within the temperature limits of the glove structure, and the worst-case heating analysis temperature results are acceptable for the mission objectives.
Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect
NASA Astrophysics Data System (ADS)
Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed
2008-12-01
Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
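The kind of inference such a controller performs can be sketched with a toy Mamdani-style rule base. The membership breakpoints and retry values below are invented for illustration and are not the paper's actual design; the rule direction matches the abstract, though: economize on retransmissions when the channel is bad.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def arq_retry_limit(snr_db, buffer_fill):
    """Toy fuzzy controller: map channel quality (SNR) and display-buffer
    occupancy (0..1) to an ARQ retransmission limit in 0..5 via a weighted
    average of rule outputs (zero-order Sugeno defuzzification)."""
    poor = tri(snr_db, -5.0, 0.0, 10.0)      # channel-quality memberships
    good = tri(snr_db, 5.0, 15.0, 25.0)
    low = tri(buffer_fill, -0.5, 0.0, 0.6)   # buffer-occupancy memberships
    high = tri(buffer_fill, 0.4, 1.0, 1.5)
    # Rules: bad channel -> retry less (save power); full buffer -> slack to retry more.
    rules = [(min(poor, low), 0.0), (min(poor, high), 2.0),
             (min(good, low), 3.0), (min(good, high), 5.0)]
    w = sum(r for r, _ in rules)
    return sum(r * v for r, v in rules) / w if w else 2.5
```

A good channel with a full buffer permits the maximum retry budget, while a poor channel with an empty buffer suppresses retransmissions, which is the power-saving behavior the abstract describes.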
Cavalcante, Milady Cutrim Vieira; Lamy, Zeni Carvalho; Lamy Filho, Fernando; França, Ana Karina Teixeira da Cunha; dos Santos, Alcione Miranda; Thomaz, Erika Bárbara Abreu Fonseca; da Silva, Antonio Augusto Moura; Salgado Filho, Natalino
2013-01-01
Low scores for quality of life (QOL) are known to be associated with higher rates of hospitalization and mortality, use of a catheter for hemodialysis vascular access, older age, lack of regular occupation, presence of comorbidities, and hypoalbuminemia. There is still no agreement about the influence of sex, educational level, socioeconomic status, and treatment time on the worst levels of QOL. The objective was to identify socioeconomic, demographic, clinical, nutritional, and laboratory factors associated with worse QOL in adults undergoing hemodialysis in São Luís, Maranhão, Brazil. This was a cross-sectional study that evaluated the QOL of patients with chronic renal disease, aged 20-59 years, undergoing hemodialysis. Two instruments were used: the Kidney Disease Quality of Life - Short Form 1.3 (KDQOL-SF™ 1.3) and a questionnaire on socioeconomic, demographic, clinical, nutritional and laboratory data. The reliability of the KDQOL-SF™ 1.3 was assessed by Cronbach's alpha. For the multivariable analysis, a Poisson regression model with robust adjustment of the standard error was used. The reliability assessment of the KDQOL-SF™ 1.3 showed a Cronbach's alpha greater than 0.8 in all areas. The areas with the worst levels of QOL were "work situation", "burden of kidney disease", "patient satisfaction", "physical function" and "general health". Having less than 8 years of schooling, coming from the countryside, and having cardiovascular disease were associated with the areas with the worst levels of QOL. The KDQOL-SF™ 1.3 is a reliable instrument to measure quality of life in hemodialysis patients. Demographic and clinical conditions can negatively influence QOL in chronic renal failure patients.
Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. 
B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. 
B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. 
D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. 
L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2016-08-01
We report on a comprehensive all-sky search for periodic gravitational waves in the frequency band 100-1500 Hz and with a frequency time derivative in the range [−1.18, +1.00] × 10⁻⁸ Hz/s. Such a signal could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our galaxy. This search uses the data from the initial LIGO sixth science run and covers a larger parameter space with respect to any past search. A Loosely Coherent detection pipeline was applied to follow up weak outliers in both Gaussian (95% recovery rate) and non-Gaussian (75% recovery rate) bands. No gravitational wave signals were observed, and upper limits were placed on their strength. Our smallest upper limit on worst-case (linearly polarized) strain amplitude h0 is 9.7 × 10⁻²⁵ near 169 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 5.5 × 10⁻²⁴. Both cases refer to all sky locations and the entire range of frequency derivative values.
Zika virus in French Polynesia 2013-14: anatomy of a completed outbreak.
Musso, Didier; Bossin, Hervé; Mallet, Henri Pierre; Besnard, Marianne; Broult, Julien; Baudouin, Laure; Levi, José Eduardo; Sabino, Ester C; Ghawche, Frederic; Lanteri, Marion C; Baud, David
2018-05-01
The Zika virus crisis exemplified the risk associated with emerging pathogens and was a reminder that preparedness for the worst-case scenario, although challenging, is needed. Herein, we review all data reported during the unexpected emergence of Zika virus in French Polynesia in late 2013. We focus on the new findings reported during this outbreak, especially the first description of severe neurological complications in adults and the retrospective description of CNS malformations in neonates, the isolation of Zika virus in semen, the potential for blood-transfusion transmission, mother-to-child transmission, and the development of new diagnostic assays. We describe the effect of this outbreak on health systems, the implementation of vector-borne control strategies, and the line of communication used to alert the international community of the new risk associated with Zika virus. This outbreak highlighted the need for careful monitoring of all unexpected events that occur during an emergence, to implement surveillance and research programmes in parallel to management of cases, and to be prepared for the worst-case scenario. Copyright © 2018 Elsevier Ltd. All rights reserved.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
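The quantization idea can be illustrated by approximating the less-probable-symbol (LPS) probability with the nearest power of 1/2, so the interval update range × p becomes a bit shift. This is a generic sketch of multiplication-free approximation and its efficiency cost, not the specific procedure developed in the paper.

```python
import math

def quantize_lps_prob(p_lps):
    """Approximate the LPS probability by the nearest power of 1/2,
    returning the shift count k and the quantized probability 2^-k."""
    k = max(1, round(-math.log2(p_lps)))
    return k, 2.0 ** -k

def loss_bits_per_symbol(p_lps, q_lps):
    """Expected coding overhead (bits/symbol) from coding with q instead of
    the true p: the Kullback-Leibler divergence D(p || q) for a binary source."""
    p_mps, q_mps = 1.0 - p_lps, 1.0 - q_lps
    return (p_lps * math.log2(p_lps / q_lps)
            + p_mps * math.log2(p_mps / q_mps))

def shift_update(range_width, k):
    """Multiplication-free LPS sub-interval width: range >> k replaces range * p."""
    return range_width >> k
```

With p = 0.3 the nearest power of 1/2 is 1/4 (k = 2), so an integer coding range of 4096 yields an LPS sub-interval of 4096 >> 2 = 1024, at a per-symbol efficiency cost measurable with the divergence above.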
Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.
DOT National Transportation Integrated Search
2013-06-01
Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon monoxide (CO) concentrations near s...
Global climate change: The quantifiable sustainability challenge
Population growth and the pressures spawned by increasing demands for energy and resource-intensive goods, foods and services are driving unsustainable growth in greenhouse gas (GHG) emissions. Recent GHG emission trends are consistent with worst-case scenarios of the previous de...
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies.
This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.
On optimal current patterns for electrical impedance tomography.
Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D
2005-02-01
We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogeneous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
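The trigonometric patterns referred to above, for n equally spaced electrodes, are the currents cos(mθ) and sin(mθ) sampled at the electrode angles. The sketch below generates that pattern set; it illustrates the classical construction, not the authors' statistical derivation.

```python
import numpy as np

def trigonometric_patterns(n_electrodes):
    """Trigonometric current patterns for n equally spaced electrodes:
    cos(m*theta_k) for m = 1..n/2 and sin(m*theta_k) for m = 1..n/2 - 1,
    giving the n-1 linearly independent patterns an EIT system can drive.
    Each pattern injects zero net current and the set is mutually orthogonal."""
    theta = 2.0 * np.pi * np.arange(n_electrodes) / n_electrodes
    rows = [np.cos(m * theta) for m in range(1, n_electrodes // 2 + 1)]
    rows += [np.sin(m * theta) for m in range(1, (n_electrodes - 1) // 2 + 1)]
    return np.vstack(rows)  # shape: (n_electrodes - 1, n_electrodes)
```

For a 16-electrode system this yields 15 orthogonal, zero-sum current patterns, one per spatial frequency.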
The Locus analytical framework for indoor localization and tracking applications
NASA Astrophysics Data System (ADS)
Segou, Olga E.; Thomopoulos, Stelios C. A.
2015-05-01
Obtaining location information can be of paramount importance in the context of pervasive and context-aware computing applications. Many systems have been proposed to date, e.g., GPS, which has been proven to offer satisfactory results in outdoor areas. The increased effect of large and small scale fading in indoor environments, however, makes localization a challenge. This is particularly reflected in the multitude of different systems that have been proposed in the context of indoor localization (e.g., RADAR and Cricket). The performance of such systems is often validated on vastly different test beds and conditions, making performance comparisons difficult and often irrelevant. The Locus analytical framework incorporates algorithms from multiple disciplines, such as channel modeling, non-uniform random number generation, computational geometry, localization, tracking and probabilistic modeling, in order to provide: (a) fast and accurate signal propagation simulation, (b) fast experimentation with localization and tracking algorithms and (c) an in-depth analysis methodology for estimating the performance limits of any Received Signal Strength localization system. Simulation results for the well-known Fingerprinting and Trilateration algorithms are herein presented and validated with experimental data collected in real conditions using IEEE 802.15.4 ZigBee modules. The analysis shows that the Locus framework accurately predicts the underlying distribution of the localization error and produces further estimates of the system's performance limitations (on a best-case/worst-case scenario basis).
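The Trilateration algorithm evaluated in the framework can be sketched as follows. This is a hedged illustration, not the Locus implementation: the anchor positions and the log-distance path-loss parameters (P0, n, d0) are assumed calibration values, and the solver is a standard least-squares linearization of the range equations.

```python
import numpy as np

# Assumed anchor positions (m) and log-distance path-loss model:
#   RSS(d) = P0 - 10 * n * log10(d / d0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
P0, n, d0 = -40.0, 2.5, 1.0   # assumed calibration constants

def rss_to_distance(rss):
    # Invert the path-loss model to get a range estimate
    return d0 * 10 ** ((P0 - rss) / (10 * n))

def trilaterate(anchors, dists):
    # Subtract the last range equation from the others to linearize
    # ||x - a_i||^2 = d_i^2, then solve the linear system in least squares.
    A = 2 * (anchors[-1] - anchors[:-1])
    b = (dists[:-1] ** 2 - dists[-1] ** 2
         + np.sum(anchors[-1] ** 2) - np.sum(anchors[:-1] ** 2, axis=1))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Noise-free round trip: simulate RSS at a known position, then recover it
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)
rss = P0 - 10 * n * np.log10(d / d0)
est = trilaterate(anchors, rss_to_distance(rss))   # recovers [3, 4]
```

In a simulation framework like the one described, fading would be added to the generated RSS values, and the spread of `est` around `true_pos` yields the localization-error distribution.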
Hybrid MRI-Ultrasound acquisitions, and scannerless real-time imaging.
Preiswerk, Frank; Toews, Matthew; Cheng, Cheng-Chieh; Chiou, Jr-Yuan George; Mei, Chang-Sheng; Schaefer, Lena F; Hoge, W Scott; Schwartz, Benjamin M; Panych, Lawrence P; Madore, Bruno
2017-09-01
To combine MRI, ultrasound, and computer science methodologies toward generating MRI contrast at the high frame rates of ultrasound, inside and even outside the MRI bore. A small transducer, held onto the abdomen with an adhesive bandage, collected ultrasound signals during MRI. Based on these ultrasound signals and their correlations with MRI, a machine-learning algorithm created synthetic MR images at frame rates up to 100 per second. In one particular implementation, volunteers were taken out of the MRI bore with the ultrasound sensor still in place, and MR images were generated on the basis of ultrasound signal and learned correlations alone in a "scannerless" manner. Hybrid ultrasound-MRI data were acquired in eight separate imaging sessions. Locations of liver features, in synthetic images, were compared with those from acquired images: The mean error was 1.0 pixel (2.1 mm), with best case 0.4 and worst case 4.1 pixels (in the presence of heavy coughing). For results from outside the bore, qualitative validation involved optically tracked ultrasound imaging with/without coughing. The proposed setup can generate an accurate stream of high-speed MR images, up to 100 frames per second, inside or even outside the MR bore. Magn Reson Med 78:897-908, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.
Programmable Logic Application Notes
NASA Technical Reports Server (NTRS)
Katz, Richard
2000-01-01
This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will start a series of notes concentrating on analysis techniques with this issues section discussing worst-case analysis requirements.
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. 
Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
Selective robust optimization: A new intensity-modulated proton therapy optimization strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yupeng; Niemela, Perttu; Siljamaki, Sami
2015-08-15
Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm, and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
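The selective idea, evaluating some objective terms on the worst-case dose over error scenarios and others on the nominal dose, can be sketched with synthetic data. All matrices, voxel groupings, and penalty forms below are illustrative assumptions; the actual algorithm operates on clinical dose-influence matrices with clinically chosen objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beamlets, n_scen = 50, 20, 5

# Synthetic dose-influence matrices: nominal plus perturbed error scenarios
D_nom = rng.random((n_vox, n_beamlets))
D_scen = [D_nom * (1 + 0.05 * rng.standard_normal(D_nom.shape))
          for _ in range(n_scen)]

ctv = np.arange(0, 25)    # target voxels: evaluated on the worst-case dose
oar = np.arange(25, 50)   # normal-tissue voxels: evaluated on the nominal dose

def objective(w, d_target=60.0):
    # Dose per voxel for the nominal case and every error scenario
    doses = np.stack([D @ w for D in [D_nom, *D_scen]])
    worst_ctv = doses[:, ctv].min(axis=0)   # worst case: lowest target dose
    nominal_oar = doses[0, oar]             # nominal dose for normal tissue
    # Quadratic underdose penalty on the target plus quadratic OAR penalty
    return np.sum((worst_ctv - d_target) ** 2) + np.sum(nominal_oar ** 2)
```

Minimizing such an objective keeps the target term robust against the error scenarios while leaving the normal-tissue term free to shape the nominal isodose distribution, which is the flexibility the abstract describes.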
Dimitroulopoulou, C; Lucica, E; Johnson, A; Ashmore, M R; Sakellaris, I; Stranger, M; Goelen, E
2015-12-01
Consumer products are frequently and regularly used in the domestic environment. Realistic estimates for product use are required for exposure modelling and health risk assessment. This paper provides significant data that can be used as input for such modelling studies. A European survey was conducted, within the framework of the DG Sanco-funded EPHECT project, on the household use of 15 consumer products. These products are all-purpose cleaners, kitchen cleaners, floor cleaners, glass and window cleaners, bathroom cleaners, furniture and floor polish products, combustible air fresheners, spray air fresheners, electric air fresheners, passive air fresheners, coating products for leather and textiles, hair styling products, spray deodorants and perfumes. The analysis of the results from the household survey (1st phase) focused on identifying consumer behaviour patterns (selection criteria, frequency of use, quantities, period of use and ventilation conditions during product use). This can provide valuable input to modelling studies, as this information is not reported in the open literature. The above results were further analysed (2nd phase), to provide the basis for the development of 'most representative worst-case scenarios' regarding the use of the 15 products by home-based population groups (housekeepers and retired people), in four geographical regions in Europe. These scenarios will be used for the exposure and health risk assessment within the EPHECT project. To the best of our knowledge, it is the first time that daily worst-case scenarios are presented in the scientific published literature concerning the use of a wide range of 15 consumer products across Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Analysis of Separation Corridors for Visiting Vehicles from the International Space Station
NASA Technical Reports Server (NTRS)
Zaczek, Mariusz P.; Schrock, Rita R.; Schrock, Mark B.; Lowman, Bryan C.
2011-01-01
The International Space Station (ISS) is a very dynamic vehicle with many operational constraints that affect its performance, operations, and vehicle lifetime. Most constraints are designed to alleviate various safety concerns that are a result of dynamic activities between the ISS and various Visiting Vehicles (VVs). One such constraint that has been in place for Russian Vehicle (RV) operations is the limitation placed on Solar Array (SA) positioning in order to prevent collisions during separation and subsequent relative motion of VVs. An unintended consequence of the SA constraint has been the impacts to the operational flexibility of the ISS resulting from the reduced power generation capability as well as from a reduction in the operational lifetime of various SA components. The purpose of this paper is to discuss the technique and the analysis that were applied in order to relax the SA constraints for RV undockings, thereby improving both the ISS operational flexibility and extending its lifetime for many years to come. This analysis focused on the effects of the dynamic motion that occur both prior to and following RV separations. The analysis involved a parametric approach in the conservative application of various initial conditions and assumptions. These included the use of the worst case minimum and maximum vehicle configurations, worst case initial attitudes and attitude rates, and the worst case docking port separation dynamics. Separations were calculated for multiple ISS docking ports, at varied deviations from the nominal undocking attitudes and included the use of two separate attitude control schemes: continuous free-drift and a post separation attitude hold. The analysis required numerical propagation of both the separation motion and the vehicle attitudes using 3-degree-of-freedom (DOF) relative motion equations coupled with rigid body rotational dynamics to generate a large set of separation trajectories.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was - 2.79 (p = .013). In placebo multiple imputation, the result was - 2.17. Results from the other sensitivity analyses ranged from - 2.21 to - 3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions was supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Ecological risk estimation of organophosphorus pesticides in riverine ecosystems.
Wee, Sze Yee; Aris, Ahmad Zaharin
2017-12-01
Pesticides are of great concern because of their existence in ecosystems at trace concentrations. Worldwide pesticide use and its ecological impacts (i.e., altered environmental distribution and toxicity of pesticides) have increased over time. Exposure and toxicity studies are vital for reducing the extent of pesticide exposure and risk to the environment and humans. Regional regulatory actions may be less relevant in some regions because the contamination and distribution of pesticides vary across regions and countries. The risk quotient (RQ) method was applied to assess the potential risk of organophosphorus pesticides (OPPs), primarily focusing on riverine ecosystems. Using the available ecotoxicity data, aquatic risks from OPPs (diazinon and chlorpyrifos) in the surface water of the Langat River, Selangor, Malaysia were evaluated based on general (RQ_m) and worst-case (RQ_ex) scenarios. Since the ecotoxicity of quinalphos has not been well established, quinalphos was excluded from the risk assessment. The calculated RQs indicate medium risk (RQ_m = 0.17 and RQ_ex = 0.66; 0.1 ≤ RQ < 1) of overall diazinon. The overall chlorpyrifos exposure was observed at high risk (RQ ≥ 1) based on RQ_m and RQ_ex at 1.44 and 4.83, respectively. A contradictory trend of RQs > 1 (high risk) was observed for both the general and worst cases of chlorpyrifos, but only for the worst cases of diazinon at all sites from downstream to upstream regions. Thus, chlorpyrifos posed a higher risk than diazinon along the Langat River, suggesting that organisms and humans could be exposed to potentially high levels of OPPs. Copyright © 2017 Elsevier Ltd. All rights reserved.
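The RQ arithmetic is simple to reproduce. A minimal sketch using the bands stated above (RQ ≥ 1 high, 0.1 ≤ RQ < 1 medium); treating RQ < 0.1 as low risk follows common convention and is an assumption here:

```python
def risk_quotient(mec, pnec):
    """RQ = measured environmental concentration / predicted no-effect
    concentration (both in the same units, e.g. µg/L)."""
    return mec / pnec

def risk_class(rq):
    # Bands: high (RQ >= 1), medium (0.1 <= RQ < 1), low (RQ < 0.1, assumed)
    if rq >= 1:
        return "high"
    if rq >= 0.1:
        return "medium"
    return "low"
```

With the reported values, risk_class(0.17) and risk_class(0.66) give "medium" for diazinon, while risk_class(1.44) and risk_class(4.83) give "high" for chlorpyrifos.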
Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.
Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal
2011-08-18
RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
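For context, the cubic-time baseline that Valiant-style techniques improve upon is the Nussinov-Jacobson dynamic program. A minimal Python sketch of that baseline (base-pair maximization only, with Watson-Crick plus G-U wobble pairs assumed; real predictors use much richer scoring):

```python
def nussinov(seq, min_loop=0):
    """Classic O(n^3) Nussinov-Jacobson DP: maximum number of base pairs
    in a pseudoknot-free secondary structure of the RNA string `seq`."""
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
            ("G", "U"), ("U", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]     # M[i][j] = best score on seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]          # case 1: base i left unpaired
            for k in range(i + 1, j + 1):   # case 2: i pairs with some k
                if (seq[i], seq[k]) in pair and k - i > min_loop:
                    left = M[i + 1][k - 1] if k > i + 1 else 0
                    right = M[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            M[i][j] = best
    return M[0][n - 1]
```

For example, nussinov("GCAU") returns 2 (the pairs G-C and A-U). The three nested loops give the cubic worst-case running time that the paper's Valiant-based reformulation reduces.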
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distribution of the device parameters and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reduction of 39.1% and 30.7% at most in worst-case delay and input-dependent leakage respectively, while the reduction in normalized deviation of worst-case delay and input-dependent leakage reaches up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit greater reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
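The Monte Carlo procedure described, Gaussian perturbation of device parameters followed by statistics over many sweeps, can be sketched as below. The parameter values and the delay/leakage expressions are toy stand-ins, not a FinFET compact model; only the structure (sample, evaluate, report mean/std/normalized deviation) mirrors the analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10000   # sweeps, as in the paper

# Assumed nominal device parameters with Gaussian process variation
vth = rng.normal(0.30, 0.02, N)       # threshold voltage (V), assumed sigma
leff = rng.normal(14e-9, 0.5e-9, N)   # fin channel length (m), assumed sigma

# Toy delay and leakage models (illustrative only)
delay = 1e-12 * (leff / 14e-9) / np.maximum(0.9 - vth, 1e-3)
leakage = 1e-9 * np.exp(-(vth - 0.30) / 0.026)

for name, x in [("delay", delay), ("leakage", leakage)]:
    mu, sigma = x.mean(), x.std()
    # normalized deviation = sigma / mu, the metric compared in the paper
    print(f"{name}: mean={mu:.3e}  std={sigma:.3e}  normalized={sigma / mu:.3f}")
```

Running the same sweeps on an optimized and a non-optimal cell and comparing the two standard deviations gives the percentage reductions quoted in the abstract.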
Walser, Tobias; Juraske, Ronnie; Demou, Evangelia; Hellweg, Stefanie
2014-01-01
A pronounced presence of toluene from rotogravure printed matter has been frequently observed indoors. However, its consequences to human health in the life cycle of magazines are poorly known. Therefore, we quantified human-health risks in indoor environments with Risk Assessment (RA) and impacts relative to the total impact of toxic releases occurring in the life cycle of a magazine with Life Cycle Assessment (LCA). We used a one-box indoor model to estimate toluene concentrations in printing facilities, newsstands, and residences in best-, average-, and worst-case scenarios. The modeled concentrations are in the range of the values measured in on-site campaigns. Toluene concentrations can approach or even surpass the occupational legal thresholds in printing facilities in realistic worst-case scenarios. The concentrations in homes can surpass the US EPA reference dose (69 μg/kg/day) in worst-case scenarios, but are still at least 1 order of magnitude lower than in press rooms or newsstands. However, toluene inhaled at home becomes the dominant contribution to the total potential human toxicity impacts of toluene from printed matter when assessed with LCA, using the USEtox method complemented with indoor characterization factors for toluene. The significant contribution (44%) of toluene exposure in production, retail, and use in households, to the total life cycle impact of a magazine in the category of human toxicity, demonstrates that the indoor compartment requires particular attention in LCA. While RA works with threshold levels, LCA assumes that every toxic emission causes an incremental change to the total impact. Here, the combination of the two paradigms provides valuable information on the life cycle stages of printed matter.
Boehmler, Erick M.; Degnan, James R.
1997-01-01
year discharges. In addition, the incipient roadway-overtopping discharge is determined and analyzed as another potential worst-case scour scenario. Total scour at a highway crossing comprises three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour and a summary of the results of these computations follows. Contraction scour for all modelled flows ranged from 1.2 to 1.8 feet. The worst-case contraction scour occurred at the incipient overtopping discharge, which is less than the 500-year discharge. Abutment scour ranged from 17.7 to 23.7 feet. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
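The summation of the three components can be illustrated with the worst-case figures reported above; the long-term degradation value is not given in this text, so it is set to zero here as a placeholder.

```python
# Total scour = long-term degradation + contraction scour + local (abutment)
# scour. Values in feet; degradation is a placeholder, the rest are the
# worst-case figures reported above.
long_term_degradation = 0.0   # not reported here; placeholder assumption
contraction_scour = 1.8       # worst case, at incipient overtopping discharge
abutment_scour = 23.7         # worst case, at the 500-year discharge

total_scour = long_term_degradation + contraction_scour + abutment_scour
print(f"worst-case total scour: {total_scour:.1f} ft")   # 25.5 ft
```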
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop linear parameter-varying (LPV) aeroservoelastic (ASE) model databases for flexible aircraft in broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
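The greedy sampling loop can be sketched in one dimension. Linear interpolation stands in for the Kriging surrogate and the benchmark response is synthetic, both assumptions; only the structure, repeatedly adding the grid point with the worst relative error until a tolerance is met, mirrors the algorithm described above.

```python
import numpy as np

# 1-D stand-in for the flight-parameter grid and the benchmark model response
grid = np.linspace(0.0, 1.0, 101)
benchmark = np.sin(6 * grid) + 0.3 * grid   # synthetic response (assumed)

samples = [0, len(grid) - 1]   # start from the corner points of the grid
tol = 0.01                     # relative-error tolerance (assumed)
while True:
    xs, ys = grid[samples], benchmark[samples]
    order = np.argsort(xs)
    # Surrogate prediction everywhere (linear interp as a Kriging stand-in)
    pred = np.interp(grid, xs[order], ys[order])
    rel_err = np.abs(pred - benchmark) / (np.abs(benchmark) + 1e-12)
    worst = int(np.argmax(rel_err))
    if rel_err[worst] < tol:
        break
    samples.append(worst)      # greedily add the worst-error grid point

print(f"kept {len(samples)} of {len(grid)} grid points")
```

As in the paper, the retained points cluster where the response changes fastest, so the surrogate meets the tolerance with a fraction of the original grid.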
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. 
Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
Closed Environment Module - Modularization and extension of the Virtual Habitat
NASA Astrophysics Data System (ADS)
Plötner, Peter; Czupalla, Markus; Zhukov, Anton
2013-12-01
The Virtual Habitat (V-HAB) is a Life Support System (LSS) simulation created to perform dynamic simulation of LSSs for future human spaceflight missions. It allows the testing of LSS robustness by means of computer simulations, e.g., of worst-case scenarios.
49 CFR 238.431 - Brake system.
Code of Federal Regulations, 2011 CFR
2011-10-01
... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow... a brake rate consistent with prevailing adhesion, passenger safety, and brake system thermal... adhesion control system designed to automatically adjust the braking force on each wheel to prevent sliding...
40 CFR 300.135 - Response operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY... discharge is a worst case discharge as discussed in § 300.324; the pathways to human and environmental exposure; the potential impact on human health, welfare, and safety and the environment; whether the...
Management of reliability and maintainability; a disciplined approach to fleet readiness
NASA Technical Reports Server (NTRS)
Willoughby, W. J., Jr.
1981-01-01
Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.
Empirical Modeling Of Single-Event Upset
NASA Technical Reports Server (NTRS)
Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.
1988-01-01
Experimental study presents examples of empirical modeling of single-event upset in negatively-doped source/drain metal-oxide-semiconductor static random-access memory cells. Data support adoption of a simplified worst-case model in which the SEU cross section for an ion above the threshold energy equals the area of the memory cell.
Kennedy, Reese D; Cheavegatti-Gianotto, Adriana; de Oliveira, Wladecir S; Lirette, Ronald P; Hjelle, Jerry J
2018-01-01
Insect-protected sugarcane that expresses Cry1Ab has been developed in Brazil. Analysis of trade information has shown that effectively all the sugarcane-derived Brazilian exports are raw or refined sugar and ethanol. The fact that raw and refined sugar are highly purified food ingredients, with no detectable transgenic protein, provides an interesting case study of a generalized safety assessment approach. In this study, both the theoretical protein intakes and safety assessments of the Cry1Ab, Cry1Ac, NPTII, and Bar proteins used in insect-protected biotechnology crops were examined. The potential consumption of these proteins was estimated using local market research data on average added sugar intakes in eight diverse and representative Brazilian raw and refined sugar export markets (Brazil, Canada, China, Indonesia, India, Japan, Russia, and the USA). The average sugar intakes, which ranged from 5.1 g of added sugar/person/day (India) to 126 g sugar/person/day (USA), were used to calculate possible human exposure. Theoretical protein intakes were estimated under two scenarios: the "Worst-case" scenario assumed that 1 μg of newly-expressed protein is present per g of raw or refined sugar, and the "Reasonable-case" scenario assumed 1 ng of protein per g of sugar. The "Worst-case" scenario was based on results of detailed studies of sugarcane processing in Brazil showing that refined sugar contains less than 1 μg of total plant protein per g of refined sugar. The "Reasonable-case" scenario was based on the assumption that the expression levels of the newly-expressed proteins in stalk were less than 0.1% of total stalk protein. Using these calculated protein intake values from the consumption of sugar, along with the accepted NOAEL levels of the four representative proteins, we concluded that safety margins ranged from 6.9 × 10⁵ to 5.9 × 10⁷ for the "Worst-case" scenario and from 6.9 × 10⁸ to 5.9 × 10¹⁰ for the "Reasonable-case" scenario.
These safety margins are very high due to the extremely low possible exposures and the high NOAELs for these non-toxic proteins. This generalized approach to the safety assessment of highly purified food ingredients like sugar illustrates that sugar processed from Brazilian GM varieties is safe for consumption in representative markets globally.
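The exposure and safety-margin arithmetic described in this abstract can be sketched as follows. The two sugar intakes (5.1 and 126 g/person/day) and the two protein-concentration scenarios come from the abstract; the body weight and the NOAEL value are illustrative assumptions, not figures from the study.

```python
# Sketch of the exposure / safety-margin arithmetic described in the abstract.
# Sugar intakes and scenario concentrations are from the abstract; body weight
# and the NOAEL are hypothetical example values.

BODY_WEIGHT_KG = 60.0          # assumed adult body weight
NOAEL_MG_PER_KG_DAY = 100.0    # hypothetical NOAEL for a newly-expressed protein

def protein_intake_mg_per_kg_day(sugar_g_per_day, protein_mg_per_g_sugar):
    """Daily protein intake from sugar, normalised to body weight."""
    return sugar_g_per_day * protein_mg_per_g_sugar / BODY_WEIGHT_KG

def safety_margin(sugar_g_per_day, protein_mg_per_g_sugar):
    """Ratio of the NOAEL to the estimated intake (dimensionless)."""
    return NOAEL_MG_PER_KG_DAY / protein_intake_mg_per_kg_day(
        sugar_g_per_day, protein_mg_per_g_sugar)

WORST_CASE = 1e-3       # 1 ug protein per g sugar, expressed in mg/g
REASONABLE_CASE = 1e-6  # 1 ng protein per g sugar, expressed in mg/g

for market, sugar in [("India", 5.1), ("USA", 126.0)]:
    print(market,
          f"worst-case margin = {safety_margin(sugar, WORST_CASE):.2e},",
          f"reasonable-case margin = {safety_margin(sugar, REASONABLE_CASE):.2e}")
```

By construction the "Reasonable-case" margin is 1000 times the "Worst-case" margin for any market, mirroring the 1 μg/g versus 1 ng/g assumptions.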
Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction
NASA Technical Reports Server (NTRS)
Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.
2006-01-01
The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with appropriate specification, in NPG8020.12C as a low temperature complementary technique to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study provided an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D value may be imposed, a process humidity range for which the worst-case D value may be imposed, and robustness to selected spacecraft material substrates.
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications system and high speed downlink packet access (UMTS-HSDPA) signals is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those on weekend days.
Full band all-sky search for periodic gravitational waves in the O1 LIGO data
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allen, G.; Allocca, A.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Angelova, S. V.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Atallah, D. V.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Austin, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barkett, K.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bawaj, M.; Bayley, J. C.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Bero, J. J.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Biscoveanu, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonilla, E.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bossie, K.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderón; Callister, T. A.; Calloni, E.; Camp, J. 
B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerdá-Durán, P.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chase, E.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, X.; Chen, Y.; Cheng, H.-P.; Chia, H. Y.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciecielag, P.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Clearwater, P.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Cohen, D.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Cordero-Carrión, I.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, E. T.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Dálya, G.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davis, D.; Daw, E. J.; Day, B.; De, S.; DeBra, D.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Demos, N.; Denker, T.; Dent, T.; De Pietri, R.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; De Rossi, C.; DeSalvo, R.; de Varona, O.; Devenson, J.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorosh, O.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. 
P.; Drago, M.; Dreissigacker, C.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dupej, P.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Estevez, D.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fee, C.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Finstad, D.; Fiori, I.; Fiorucci, D.; Fishbach, M.; Fisher, R. P.; Fitz-Axen, M.; Flaminio, R.; Fletcher, M.; Fong, H.; Font, J. A.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garcia-Quiros, C.; Garufi, F.; Gateley, B.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; Goncharov, B.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Gretarsson, E. M.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Halim, O.; Hall, B. R.; Hall, E. D.; Hamilton, E. Z.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. 
J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinderer, T.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hreibi, A.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kamai, B.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, K.; Kim, W.; Kim, W. S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kinley-Hanlon, M.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Knowles, T. D.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Linker, S. D.; Littenberg, T. B.; Liu, J.; Lo, R. K. L.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. 
D.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macas, R.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Markowitz, A.; Maros, E.; Marquina, A.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Mason, K.; Massera, E.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McNeill, L.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, B. B.; Miller, J.; Millhouse, M.; Milovich-Goff, M. C.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moffa, D.; Moggi, A.; Mogushi, K.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muñiz, E. A.; Muratore, M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Neilson, J.; Nelemans, G.; Nelson, T. J. N.; Nery, M.; Neunzert, A.; Nevin, L.; Newport, J. M.; Newton, G.; Ng, K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; North, C.; Nuttall, L. K.; Oberling, J.; O'Dea, G. D.; Ogin, G. 
H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Okada, M. A.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ossokine, S.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, Howard; Pan, Huang-Wei; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Parida, A.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patil, M.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pirello, M.; Pisarski, A.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Pratten, G.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rajbhandari, B.; Rakhmanov, M.; Ramirez, K. E.; Ramos-Buades, A.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Ren, W.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Rutins, G.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sanchez, L. E.; Sanchis-Gual, N.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Saulson, P. 
R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheel, M.; Scheuer, J.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shaner, M. B.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, L. P.; Singh, A.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Somala, S.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staats, K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stevenson, S. P.; Stone, R.; Stops, D. J.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Strunk, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Suresh, J.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Tait, S. C.; Talbot, C.; Talukder, D.; Tanner, D. B.; Tao, D.; Tápai, M.; Taracchini, A.; Tasson, J. D.; Taylor, J. A.; Taylor, R.; Tewari, S. V.; Theeg, T.; Thies, F.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torres-Forné, A.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tsukada, L.; Tsuna, D.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. 
A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, W. H.; Wang, Y. F.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westerweck, J.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Wilken, D.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, W. K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wysocki, D. M.; Xiao, S.; Yamamoto, H.; Yancey, C. C.; Yang, L.; Yap, M. J.; Yazback, M.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadroźny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2018-05-01
We report on a new all-sky search for periodic gravitational waves in the frequency band 475-2000 Hz and with a frequency time derivative in the range of [-1.0, +0.1] × 10⁻⁸ Hz/s. Potential signals could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our Galaxy. This search uses the data from Advanced LIGO's first observational run, O1. No gravitational-wave signals were observed, and upper limits were placed on their strengths. For completeness, results from the separately published low-frequency search (20-475 Hz) are included as well. Our lowest upper limit on worst-case (linearly polarized) strain amplitude h₀ is ~4 × 10⁻²⁵ near 170 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 1.3 × 10⁻²⁴. For a circularly polarized source (most favorable orientation), the smallest upper limit obtained is ~1.5 × 10⁻²⁵.
Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?
NASA Astrophysics Data System (ADS)
Ferrie, Chris; Blume-Kohout, Robin
2011-03-01
Quantum state estimation (a.k.a. ``tomography'') plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g. for classical coins or dice - but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the ``noisy coin.'' Observations on noisy coins are unreliable - much like solicited sensitive information such as one's tax preparation habits. So, like a quantum system, it cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.
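One way to see why a noisy coin is harder to estimate is a simple plug-in-estimator variance calculation. This is only an illustration of the noise penalty, not the paper's minimax analysis; the flip probability `alpha` and sample size are assumed example values.

```python
# Variance of the unbiased plug-in estimator for a noisy coin.
# Heads are observed with probability q = alpha + (1 - 2*alpha) * p,
# and p is recovered as p_hat = (q_hat - alpha) / (1 - 2*alpha).
# This sketches the noise penalty only; it is not the minimax 1/sqrt(N) result.

def plugin_variance(p, n, alpha):
    """Variance of p_hat after n observations with flip probability alpha."""
    q = alpha + (1.0 - 2.0 * alpha) * p
    return q * (1.0 - q) / (n * (1.0 - 2.0 * alpha) ** 2)

N = 10_000
for p in (0.0, 0.5):
    print(f"p = {p}: clean coin {plugin_variance(p, N, 0.0):.3e}, "
          f"noisy coin {plugin_variance(p, N, 0.1):.3e}")
```

Note that for a clean coin (`alpha = 0`) the variance vanishes at the pure states p = 0 and p = 1, whereas with noise it stays bounded away from zero there, hinting at why nearly-pure states dominate the noisy-coin risk.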
Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Rault, Didier F. G.
1994-01-01
A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.
Thermal-hydraulic analysis of N Reactor graphite and shield cooling system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, J.O.; Schmitt, B.E.
1988-02-01
A series of bounding (worst-case) calculations was performed using a detailed hydrodynamic RELAP5 model of the N Reactor graphite and shield cooling system (GSCS). These calculations were specifically aimed at answering issues raised by the Westinghouse Independent Safety Review (WISR) committee. These questions address the operability of the GSCS during a worst-case degraded-core accident that requires the GSCS to mitigate the consequences of the accident. An accident scenario previously developed was designated as the hydrogen-mitigation design-basis accident (HMDBA). Previous HMDBA heat transfer analysis, using the TRUMP-BD code, was used to define the thermal boundary conditions to which the GSCS may be exposed. These TRUMP/HMDBA analysis results were used to define the bounding operating conditions of the GSCS during the course of an HMDBA transient. Nominal and degraded GSCS scenarios were investigated using RELAP5 within or at the bounds of the HMDBA transient. 10 refs., 42 figs., 10 tabs.
Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift
NASA Astrophysics Data System (ADS)
Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.
2012-01-01
This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
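The ZMP wheel-lift condition described in this abstract can be sketched with a minimal rigid-vehicle kinematic model: under lateral acceleration the ZMP shifts laterally, and wheel lift is predicted when it leaves the support polygon (half the track width). The CG height and track width below are illustrative values, not parameters from the paper, and roll dynamics and suspension compliance are ignored.

```python
# Minimal kinematic sketch of the zero-moment-point (ZMP) rollover check.
# Rigid vehicle, no roll dynamics; h (CG height) and track width are
# illustrative example values, not from the paper.

def zmp_lateral_offset(a_y, cg_height, g=9.81):
    """Lateral ZMP offset (m) of a rigid vehicle under lateral acceleration a_y."""
    return cg_height * a_y / g

def wheel_lift(a_y, cg_height, track_width, g=9.81):
    """True when the ZMP leaves the support polygon (beyond half the track)."""
    return abs(zmp_lateral_offset(a_y, cg_height, g)) > track_width / 2.0

h, track = 0.65, 1.55            # metres (illustrative SUV-like values)
ssf = track / (2.0 * h)          # static stability factor implied by this model
print(f"SSF = {ssf:.2f}")
print("lift at a_y = 0.9 g?", wheel_lift(0.9 * 9.81, h, track))
print("lift at a_y = 1.3 g?", wheel_lift(1.3 * 9.81, h, track))
```

In this simplified model the lift threshold reduces exactly to the static stability factor; the paper's contribution is the frequency-domain and worst-case-manoeuvre extension beyond this steady-state picture.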
Burns, Ronda L.; Severance, Timothy
1997-01-01
Contraction scour for all modelled flows ranged from 15.8 to 22.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.7 to 11.1 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring is included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
A CMOS matrix for extracting MOSFET parameters before and after irradiation
NASA Technical Reports Server (NTRS)
Blaes, B. R.; Buehler, M. G.; Lin, Y.-S.; Hicks, K. A.
1988-01-01
An addressable matrix of 16 n- and 16 p-MOSFETs was designed to extract the dc MOSFET parameters for all dc gate bias conditions before and after irradiation. The matrix contains four sets of MOSFETs, each with four different geometries that can be biased independently. Thus the worst-case bias scenarios can be determined. The MOSFET matrix was fabricated at a silicon foundry using a radiation-soft CMOS p-well LOCOS process. Co-60 irradiation results for the n-MOSFETs showed a threshold-voltage shift of -3 mV/krad(Si), whereas the p-MOSFETs showed a shift of 21 mV/krad(Si). The worst-case threshold-voltage shift occurred for the n-MOSFETs, with a gate bias of 5 V during the anneal. For the p-MOSFETs, biasing did not affect the shift in the threshold voltage. A parasitic MOSFET dominated the leakage of the n-MOSFET biased with 5 V on the gate during irradiation. Co-60 test results for other parameters are also presented.
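The quoted threshold-voltage shift rates can be turned into a dose estimate by simple linear extrapolation. The per-krad rates below are the ones stated in the abstract; assuming linearity over the full dose range is an illustrative simplification made here, not a claim from the paper.

```python
# Linear extrapolation of the quoted threshold-voltage shift rates.
# The mV/krad(Si) rates are from the abstract; linearity over the full
# dose range is an assumption made for illustration.

RATE_MV_PER_KRAD = {"n-MOSFET": -3.0, "p-MOSFET": 21.0}

def vth_shift_mv(device, dose_krad):
    """Estimated threshold-voltage shift (mV) at the given total dose."""
    return RATE_MV_PER_KRAD[device] * dose_krad

for dev in RATE_MV_PER_KRAD:
    print(dev, f"shift at 100 krad(Si): {vth_shift_mv(dev, 100.0):+.0f} mV")
```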
Mad cows and computer models: the U.S. response to BSE.
Ackerman, Frank; Johnecheck, Wendy A
2008-01-01
The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.
Modelling the long-term evolution of worst-case Arctic oil spills.
Blanken, Hauke; Tremblay, Louis Bruno; Gaskin, Susan; Slavin, Alexander
2017-03-15
We present worst-case assessments of contamination in sea ice and surface waters resulting from hypothetical well blowout oil spills at ten sites in the Arctic Ocean basin. Spill extents are estimated by considering Eulerian passive tracers in the surface ocean of the MITgcm (a hydrostatic, coupled ice-ocean model). Oil in sea ice, and contamination resulting from the melting of oiled ice, is tracked using an offline Lagrangian scheme. Spills are initialized on November 1st of each year from 1980 to 2010 and tracked for one year. An average spill was transported 1100 km and potentially affected 1.1 million km². The direction and magnitude of simulated oil trajectories are consistent with known large-scale current and sea ice circulation patterns, and trajectories frequently cross international boundaries. The simulated trajectories of oil in sea ice match observed ice drift trajectories well. During the winter, oil transport by drifting sea ice is more significant than transport with surface currents. Copyright © 2017 Elsevier Ltd. All rights reserved.
Homaeinezhad, M R; Erfanianmoshiri-Nejad, M; Naseri, H
2014-01-01
The goal of this study is to introduce a simple, standard and safe procedure to detect and to delineate the P and T waves of the electrocardiogram (ECG) signal in real conditions. The proposed method consists of four major steps: (1) a secure QRS detection and delineation algorithm, (2) a pattern recognition algorithm designed to distinguish the various ECG clusters that occur between consecutive R-waves, (3) extraction of a template of the dominant events of each cluster waveform and (4) application of correlation analysis to automatically delineate the P- and T-waves in noisy conditions. The performance characteristics of the proposed P and T detection-delineation algorithm are evaluated on various ECG signals whose quality is degraded from the best to the worst case based on random-walk noise theory. Also, the method is applied to the MIT-BIH Arrhythmia and the QT databases to compare some parts of its performance characteristics with a number of P and T detection-delineation algorithms. The conducted evaluations indicate that in a signal with a low quality value of about 0.6, the proposed method detects the P and T events with sensitivity Se = 85% and positive predictive value P+ = 89%, respectively. In addition, at the same quality, the average delineation errors associated with those ECG events are 45 and 63 ms, respectively. Stable delineation error, high detection accuracy and high noise tolerance were the most important aspects considered during development of the proposed method. © 2013 Elsevier Ltd. All rights reserved.
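Step (4), correlation-based delineation, amounts to sliding a wave template along the signal and picking the location of maximum normalized correlation. The sketch below uses a synthetic Gaussian "P wave" embedded in a signal with deterministic baseline wander; the sampling rate, template shape, and embedding position are all assumptions for illustration, not the paper's data or exact algorithm.

```python
import numpy as np

# Sketch of correlation-based wave location: slide a template along the
# signal and pick the index of maximum normalized correlation.
# Synthetic signal and template; illustrative only.

def locate_by_correlation(signal, template):
    """Return the sample index where the normalized correlation peaks."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    best_idx, best_score = 0, -np.inf
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, t) / n)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

fs = 250                                        # assumed sampling rate, Hz
time = np.arange(0, 0.12, 1.0 / fs)
template = np.exp(-((time - 0.06) ** 2) / (2 * 0.015 ** 2))  # Gaussian "P wave"
signal = np.zeros(500)
signal[200:200 + len(template)] += template                  # embed the wave
signal += 0.05 * np.sin(2 * np.pi * 0.7 * np.arange(500) / fs)  # baseline wander
print("detected onset sample:", locate_by_correlation(signal, template))
```

The per-window mean/std normalization is what gives the approach its tolerance to baseline wander and amplitude changes, which matters at the low signal-quality levels the study evaluates.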
Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M
2015-07-01
The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull. The errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show the root mean square error (RMSE) for restoration of a known defect using the native best match skull, our statistical average skull, and worst match skull was 0.58, 0.74, and 4.4 mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.
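The RMSE used above to compare a computer-generated cranioplasty with the original intact skull is, for corresponding point sets, simply the root-mean-square of point distances (a sketch; the study computes it over registered surface meshes):

```python
import numpy as np

def surface_rmse(predicted, original):
    """Root-mean-square error between corresponding surface points
    (N x 3 arrays of x, y, z coordinates, e.g. in mm)."""
    d = np.linalg.norm(predicted - original, axis=1)  # per-point distance
    return float(np.sqrt(np.mean(d ** 2)))

# Toy check: a uniform 1 mm offset in z yields an RMSE of exactly 1 mm.
orig = np.random.default_rng(1).uniform(size=(100, 3))
pred = orig + np.array([0.0, 0.0, 1.0])
rmse = surface_rmse(pred, orig)
```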
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
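The regularization route can be sketched as ridge-penalized (Tikhonov) least squares with the weight chosen by cross-validation; this is a generic illustration under assumed names, since the paper's quadratic program additionally constrains the spectrum (e.g. non-negativity):

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Tikhonov-regularized least squares: solve (A^T A + lam*I) s = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def pick_lambda(A, b, lams, k=5):
    """Simple k-fold cross-validation over candidate regularization weights."""
    idx = np.arange(len(b))
    folds = np.array_split(idx, k)
    best, best_err = lams[0], np.inf
    for lam in lams:
        err = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            s = ridge_solve(A[train], b[train], lam)
            err += float(np.sum((A[f] @ s - b[f]) ** 2))  # held-out residual
        if err < best_err:
            best, best_err = lam, err
    return best
```

The regularization term stabilizes the ill-conditioned inversion; cross-validation picks the weight that predicts held-out attenuation measurements best rather than the one that merely fits the training data.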
String Stability of a Linear Formation Flight Control System
NASA Technical Reports Server (NTRS)
Allen, Michael J.; Ryan, Jack; Hanson, Curtis E.; Parle, James F.
2002-01-01
String stability analysis of an autonomous formation flight system was performed using linear and nonlinear simulations. String stability is a measure of how position errors propagate from one vehicle to another in a cascaded system. In the formation flight system considered here, each i-th aircraft uses information from itself and the preceding (i-1)-th aircraft to track a commanded relative position. A possible solution for meeting performance requirements with such a system is to allow string instability. This paper explores two results of string instability and outlines analysis techniques for string unstable systems. The three analysis techniques presented here are: linear, nonlinear formation performance, and ride quality. The linear technique was developed from a worst-case scenario and could be applied to the design of a string unstable controller. The nonlinear formation performance and ride quality analysis techniques both use nonlinear formation simulation. Three of the four formation-controller gain-sets analyzed in this paper were limited more by ride quality than by performance. Formations of up to seven aircraft in a cascaded formation could be used in the presence of light gusts with this string unstable system.
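String stability of a linear predecessor-following law is commonly checked via the peak gain of the vehicle-to-vehicle error transfer function: if its magnitude exceeds 1 at any frequency, position errors are amplified down the string. A sketch (the transfer function below is a textbook predecessor-following form, not the paper's controller):

```python
import numpy as np

def string_stability_peak(num, den, w=np.logspace(-2, 2, 2000)):
    """Peak magnitude of H(jw) = num(jw)/den(jw), the error transfer
    function between consecutive vehicles; a peak above 1 means string
    instability (errors grow along the cascade)."""
    jw = 1j * w
    H = np.polyval(num, jw) / np.polyval(den, jw)
    return float(np.max(np.abs(H)))

# Hypothetical law with natural frequency wn and damping z:
#   H(s) = (2*z*wn*s + wn^2) / (s^2 + 2*z*wn*s + wn^2)
wn, z = 1.0, 0.7
peak = string_stability_peak([2 * z * wn, wn ** 2], [1.0, 2 * z * wn, wn ** 2])
```

For this form the peak exceeds 1 for any damping ratio, which is the classic argument that pure predecessor-following is string unstable; in a seven-aircraft formation the trailing aircraft sees the leader's disturbance amplified roughly by the peak raised to the sixth power at the worst frequency.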
NASA Astrophysics Data System (ADS)
Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.
2018-05-01
Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions. The key feature of this framework allows the system error bound between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to be less than an a priori, user-defined worst-case performance bound, and hence, it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.
Regularization techniques on least squares non-uniform fast Fourier transform.
Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena
2013-05-01
Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
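The TSVD regularization that performed best can be sketched as a truncated pseudoinverse: keep only the largest singular values, discarding the small ones that make the interpolator ill-conditioned (a generic illustration, not the LS_NUFFT code itself):

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: invert only the k largest singular
    values; small singular values, which would amplify noise, are dropped."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ np.diag(s_inv) @ U.T

# An ill-conditioned matrix: the raw pseudoinverse would contain a 1/1e-12
# term; truncating to k=2 keeps the interpolator bounded.
A = np.diag([3.0, 2.0, 1e-12])
P = tsvd_pinv(A, 2)
```

The truncation level plays the same role as the Tikhonov weight or the L1 penalty in the other two methods compared: it trades fidelity on well-determined components for stability on poorly determined ones.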
Landsat-7 ETM+ Radiometric Calibration Status
NASA Technical Reports Server (NTRS)
Barsi, Julia A.; Markham, Brian L.; Czapla-Myers, Jeffrey S.; Helder, Dennis L.; Hook, Simon J.; Schott, John R; Haque, Md. Obaidul
2016-01-01
Now in its 17th year of operation, the Enhanced Thematic Mapper Plus (ETM+), on board the Landsat-7 satellite, continues to systematically acquire imagery of the Earth to add to the 40+ year archive of Landsat data. Characterization of the ETM+ on-orbit radiometric performance has been on-going since its launch in 1999. The radiometric calibration of the reflective bands is still monitored using on-board calibration devices, though the Pseudo-Invariant Calibration Sites (PICS) method has proven to be an effective tool as well. The calibration gains were updated in April 2013 based primarily on PICS results, which corrected for degradation of as much as 0.2%/year in the worst-case bands. A new comparison with the SADE database of PICS results indicates no additional degradation in the updated calibration. PICS data are still being tracked, though the recent trends are not well understood. The thermal band calibration was updated last in October 2013 based on a continued calibration effort by NASA/Jet Propulsion Lab and Rochester Institute of Technology. The update accounted for a 0.31 W/sq m/sr/micron bias error. The updated lifetime trend is now stable to within ±0.4 K.
Fattori, Giovanni; Safai, Sairos; Carmona, Pablo Fernández; Peroni, Marta; Perrin, Rosalind; Weber, Damien Charles; Lomax, Antony John
2017-03-31
Motion monitoring is essential when treating non-static tumours with pencil beam scanned protons. 4D medical imaging typically relies on the detected body surface displacement, considered as a surrogate of the patient's anatomical changes, a concept similarly applied by most motion mitigation techniques. In this study, we investigate benefits and pitfalls of optical and electromagnetic tracking, key technologies for non-invasive surface motion monitoring, in the specific environment of image-guided, gantry-based proton therapy. The Polaris SPECTRA optical tracking system and the Aurora V3 electromagnetic tracking system from Northern Digital Inc. (NDI, Waterloo, CA) were compared both technically, by measuring tracking errors and system latencies under laboratory conditions, and clinically, by assessing their practicalities and sensitivities when used with imaging devices and PBS treatment gantries. Additionally, we investigated the impact of using different surrogate signals, from different systems, on the reconstructed 4D CT images. Even though in controlled laboratory conditions both technologies allow for the localization of static fiducials with sub-millimetre jitter and low latency (31.6 ± 1 ms worst case), significant dynamic and environmental distortions limit the potential of the electromagnetic approach in a clinical setting. The measurement error in case of close proximity to a CT scanner is up to 10.5 mm and precludes its use for the monitoring of respiratory motion during 4DCT acquisitions. Similarly, the motion of the treatment gantry distorts the tracking result by up to 22 mm. Despite the line of sight requirement, the optical solution offers the best potential, being the most robust against environmental factors and providing the highest spatial accuracy.
The significant difference in the temporal location of the reconstructed phase points is used to speculate on the need to apply the same monitoring system for imaging and treatment to ensure the consistency of detected phases.
Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu
2017-11-01
This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it still remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address the questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S³ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S³ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S³ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing an S³ONC solution admits an FPTAS.
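The minimax concave penalty (MCP) referenced in (iii) has the standard closed form below: lasso-like (linear) near zero, flattening to a constant beyond |t| = a*lam so that large coefficients are not over-shrunk. This is a sketch of the penalty itself, not of the FCPSLR solver:

```python
import numpy as np

def mcp(t, lam, a):
    """Minimax concave penalty: lam*|t| - t^2/(2a) for |t| <= a*lam,
    and the constant a*lam^2/2 beyond that threshold."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= a * lam,
                    lam * t - t ** 2 / (2 * a),
                    a * lam ** 2 / 2)
```

The two branches meet continuously at |t| = a*lam, where both equal a*lam^2/2; the concavity is what makes the global problem hard and motivates the paper's focus on stationary points.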
Multiple Microcomputer Control Algorithm.
1979-09-01
discrete and semaphore supervisor calls can be used with tasks in separate processors, in which case they are maintained in shared memory. Operations on ...the source or destination operand specifier of each mode in most cases. However, four of the 16 general register addressing modes and one of the 8 pro...instruction time is based on the specified usage factors and the best case and worst case execution times for the instruction
Pelham, Sabra D
2011-03-01
English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63.3% in English, compared with 7.6% in German. The percentage of case-ambiguous articles in German was 77.0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.
Investigation of the Human Response to Upper Torso Retraction with Weighted Helmets
2013-09-01
coverage of each test. The Kodak system is capable of recording high-speed motion up to a rate of 1000 frames per second. For this study, the video...the measured center-of-gravity (CG) of the worst-case test helmet fell outside the current limits and no injuries were observed, it can be stated...Figure 7. T-test Cases 1-9 (0 lb Added Helmet Weight
1980-08-01
the sequence threshold does not utilize the DC level information and the time thresholding adaptively adjusts for DC level. This characteristic...lowest 256/8 = 32 elements. The above observation can be mathematically proven to also reflect the fact that the lowest (NT/W) elements can, at worst case
ERIC Educational Resources Information Center
Fitzgerald, Patricia L.
1998-01-01
Although only 5% of the population has severe food allergies, school business officials must be prepared for the worst-case scenario. Banning foods and segregating allergic children are harmful practices. Education and sensible behavior are the best medicine when food allergies and intolerances are involved. Resources are listed. (MLH)
Shuttle ECLSS ammonia delivery capability
NASA Technical Reports Server (NTRS)
1976-01-01
The possible effects of excessive requirements on ammonia flow rates required for entry cooling, due to extreme temperatures, on mission plans for the space shuttles, were investigated. An analysis of worst case conditions was performed, and indicates that adequate flow rates are available. No mission impact is therefore anticipated.
41 CFR 102-80.145 - What is meant by “flashover”?
Code of Federal Regulations, 2010 CFR
2010-07-01
...”? Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m2 (1.8 Btu/ft2/sec). Reasonable Worst Case...
41 CFR 102-80.145 - What is meant by “flashover”?
Code of Federal Regulations, 2011 CFR
2011-01-01
...”? Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m2 (1.8 Btu/ft2/sec). Reasonable Worst Case...
Trapped Proton Environment in Medium-Earth Orbit (2000-2010)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yue; Friedel, Reinhard Hans; Kippen, Richard Marc
This report describes the method used to derive fluxes of the trapped proton belt along the GPS orbit (i.e., a Medium-Earth Orbit) during 2000-2010, a period almost covering a solar cycle. This method utilizes a newly developed empirical proton radiation-belt model, with the model output scaled by GPS in-situ measurements, to generate proton fluxes that cover a wide range of energies (50 keV-6 MeV) and preserve temporal features as well. The new proton radiation-belt model is developed based upon CEPPAD proton measurements from the Polar mission (1996-2007). Compared to the de facto standard empirical model AP8, this model is not only based upon a new data set representative of the proton belt during the same period covered by GPS, but can also provide statistical information on flux values such as worst cases and occurrence percentiles instead of solely the mean values. The comparison shows quite different results from the two models and suggests that the commonly accepted error factor of 2 on the AP8 flux output over-simplifies and thus underestimates variations of the proton belt. Output fluxes from this new model along the GPS orbit are further scaled by the ns41 in-situ data so as to reflect the dynamic nature of protons in the outer radiation belt at geomagnetically active times. Derived daily proton fluxes along the GPS ns41 orbit, whose data files are delivered along with this report, are depicted to illustrate the trapped proton environment in Medium-Earth Orbit. Uncertainties on those daily proton fluxes from two sources are evaluated: one is from the new proton-belt model, which has error factors < ~3; the other is from the in-situ measurements, whose error factors could be ~5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faught, J Tonigan; Johnson, J; Stingo, F
2015-06-15
Purpose: To assess the perception of TG-142 tolerance level dose delivery failures in IMRT and the application of the FMEA process to this specific aspect of IMRT. Methods: An online survey was distributed to medical physicists worldwide that briefly described 11 different failure modes (FMs) covered by basic quality assurance in step-and-shoot IMRT at or near TG-142 tolerance criteria levels. For each FM, respondents estimated the worst-case H&N patient percent dose error and FMEA scores for Occurrence, Detectability, and Severity. Demographic data was also collected. Results: 181 individual and three group responses were submitted. 84% were from North America. Most (76%) individual respondents performed at least 80% clinical work and 92% were nationally certified. Respondent medical physics experience ranged from 2.5-45 years (average 18 years). 52% of individual respondents were at least somewhat familiar with FMEA, while 17% were not familiar. Several IMRT techniques, treatment planning systems and linear accelerator manufacturers were represented. All FMs received widely varying scores ranging from 1-10 for occurrence, at least 1-9 for detectability, and at least 1-7 for severity. Ranking FMs by RPN scores also resulted in large variability, with each FM being ranked both most risky (1st) and least risky (11th) by different respondents. On average MLC modeling had the highest RPN scores. Individual estimated percent dose errors and severity scores positively correlated (p<0.10) for each FM as expected. No universal correlations were found between the demographic information collected and scoring, percent dose errors, or ranking. Conclusion: FMs investigated overall were evaluated as low to medium risk, with average RPNs less than 110. The ranking of 11 FMs was not agreed upon by the community. Large variability in FMEA scoring may be caused by individual interpretation and/or experience, thus reflecting the subjective nature of the FMEA tool.
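The FMEA Risk Priority Number used to rank failure modes is simply the product of the three 1-10 scores; as a sketch (the failure-mode names and scores below are made up for illustration):

```python
def rpn(occurrence, detectability, severity):
    """Risk Priority Number used in FMEA: the product of the three
    1-10 scores; a higher RPN means a riskier failure mode."""
    for s in (occurrence, detectability, severity):
        if not 1 <= s <= 10:
            raise ValueError("FMEA scores must be on a 1-10 scale")
    return occurrence * detectability * severity

# Hypothetical scores (O, D, S) for two failure modes, ranked by RPN.
failure_modes = {"MLC modeling": (5, 6, 4), "output drift": (3, 2, 5)}
ranked = sorted(failure_modes, key=lambda fm: rpn(*failure_modes[fm]),
                reverse=True)
```

Because the RPN is a product of subjective ordinal scores, small disagreements in any one score multiply into large ranking differences, which is consistent with the variability the survey observed.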
The FASTER Approach: A New Tool for Calculating Real-Time Tsunami Flood Hazards
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Cross, A.; Johnson, L.; Miller, K.; Nicolini, T.; Whitmore, P.
2014-12-01
In the aftermath of the 2010 Chile and 2011 Japan tsunamis that struck the California coastline, emergency managers requested that the state tsunami program provide more detailed information about the flood potential of distant-source tsunamis well ahead of their arrival time. The main issue is that existing tsunami evacuation plans call for evacuation of the predetermined "worst-case" tsunami evacuation zone (typically at a 30- to 50-foot elevation) during any "Warning" level event; the alternative is to not call an evacuation at all. A solution to provide more detailed information for secondary evacuation zones has been the development of tsunami evacuation "playbooks" to plan for tsunami scenarios of various sizes and source locations. To determine a recommended level of evacuation during a distant-source tsunami, an analytical tool has been developed called the "FASTER" approach, an acronym for factors that influence the tsunami flood hazard for a community: Forecast Amplitude, Storm, Tides, Error in forecast, and the Run-up potential. Within the first couple hours after a tsunami is generated, the National Tsunami Warning Center provides tsunami forecast amplitudes and arrival times for approximately 60 coastal locations in California. At the same time, the regional NOAA Weather Forecast Offices in the state calculate the forecasted coastal storm and tidal conditions that will influence tsunami flooding. Providing added conservatism in calculating tsunami flood potential, we include an error factor of 30% for the forecast amplitude, which is based on observed forecast errors during recent events, and a site specific run-up factor which is calculated from the existing state tsunami modeling database. 
The factors are added together into a cumulative FASTER flood potential value for the first five hours of tsunami activity and used to select the appropriate tsunami phase evacuation "playbook" which is provided to each coastal community shortly after the forecast is provided.
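A minimal sketch of combining the FASTER factors follows; the exact combination rule and the run-up parameterization here are assumptions (the text says only that the factors are summed and that a 30% forecast-error margin and a site-specific run-up factor are applied):

```python
def faster_flood_level(forecast_amp_m, storm_m, tide_m, runup_factor,
                       error_frac=0.30):
    """FASTER flood potential sketch: Forecast Amplitude inflated by the
    Error margin, plus Storm and Tide levels, plus site-specific Run-up,
    summed into a conservative flood elevation (metres)."""
    amp_with_error = forecast_amp_m * (1.0 + error_frac)
    runup = forecast_amp_m * runup_factor  # assumed proportional to amplitude
    return amp_with_error + storm_m + tide_m + runup

# e.g. a 2.0 m forecast with 0.3 m storm surge, 1.5 m tide, run-up factor 0.5:
level = faster_flood_level(2.0, 0.3, 1.5, 0.5)
```

Emergency managers would compare the cumulative value over the first five hours of tsunami activity against evacuation-zone elevations to pick the appropriate playbook phase.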
2009-01-01
Background Increasing reports of carbapenem resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracies of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparing to the validated test methods. Methods Selected 112 clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan performed true identification of all A. baumannii strains while Vitek 2 unidentified one strain, Phoenix unidentified two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly higher (0.3%) than the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing with unacceptable error rates; 28 very major (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems.
We suggest clinical laboratories using the MicroScan system for routine use should consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
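The error categories used in this comparison follow the standard convention for judging a test system against the reference broth microdilution; a sketch (category names per convention, function name illustrative):

```python
def classify_error(reference, test):
    """Categorical agreement against the reference method using the usual
    S (susceptible) / I (intermediate) / R (resistant) calls:
    'very major' = false susceptible, 'major' = false resistant,
    'minor' = disagreement involving an intermediate call."""
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"   # most dangerous: resistance missed
    if reference == "S" and test == "R":
        return "major"
    return "minor"
```

Very major errors are weighted most heavily clinically because a falsely susceptible report can lead to treating a resistant infection with an ineffective drug.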
Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing
NASA Technical Reports Server (NTRS)
Levine, I.
1981-01-01
A method is presented for systematic and geodetic correction data calculation. It is based on presentation of image distortions as a sum of nominal distortions and linear effects caused by variation of the spacecraft position and attitude variables from their nominals. The method may be used for both MSS and TM image data and it is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.
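The correction model, total distortion = nominal distortion + linear sensitivity times the state deviation, is a first-order expansion; as a sketch (the names and sensitivity values are illustrative, not LANDSAT-D calibration numbers):

```python
import numpy as np

def corrected_distortion(nominal, jacobian, delta_state):
    """Image distortion as the nominal term plus the first-order (linear)
    effect of spacecraft position/attitude deviations from nominal."""
    return np.asarray(nominal) + np.asarray(jacobian) @ np.asarray(delta_state)

# Illustration: a 1 mrad roll deviation shifting the along/across-track
# distortion (metres); sensitivities are hypothetical.
nominal = np.array([2.0, -1.0])
J = np.array([[500.0, 0.0, 0.0],    # sensitivity to (roll, pitch, yaw)
              [0.0, 300.0, 50.0]])
d = corrected_distortion(nominal, J, [1e-3, 0.0, 0.0])
```

Because the correction is linear in the state deviation, the sensitivities can be precomputed offline and only the small deviation terms need be applied per frame, which matches the mostly-offline processing described.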
Gomberg, Joan S.; Agnew, Duncan Carr
1996-01-01
The dynamic strains associated with seismic waves may play a significant role in earthquake triggering, hydrological and magmatic changes, earthquake damage, and ground failure. We determine how accurately dynamic strains may be estimated from seismometer data and elastic-wave theory by comparing such estimated strains with strains measured on a three-component long-base strainmeter system at Piñon Flat, California. We quantify the uncertainties and errors through cross-spectral analysis of data from three regional earthquakes (the M0 = 4 × 10^17 N·m St. George, Utah; M0 = 4 × 10^17 N·m Little Skull Mountain, Nevada; and M0 = 1 × 10^19 N·m Northridge, California, events at distances of 470, 345, and 206 km, respectively). Our analysis indicates that in most cases the phase of the estimated strain matches that of the observed strain quite well (to within the uncertainties, which are about ±0.1 to ±0.2 cycles). However, the amplitudes are often systematically off, at levels exceeding the uncertainties (about 20%); in one case, the predicted strain amplitudes are nearly twice those observed. We also observe significant εtt strains (t = tangential direction), which should theoretically be zero; in the worst case, the rms εtt strain exceeds the other nonzero components. These nonzero εtt strains cannot be caused by deviations of the surface-wave propagation paths from the expected azimuth or by departures from the plane-wave approximation. We believe that distortion of the strain field by topography or material heterogeneities gives rise to these complexities.
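The seismometer-based estimate rests on the plane-wave relation that longitudinal dynamic strain equals particle velocity divided by phase velocity; a sketch of that single relation (sign convention and names are illustrative):

```python
def plane_wave_strain(particle_velocity_ms, phase_velocity_ms):
    """Plane-wave approximation: longitudinal dynamic strain magnitude is
    particle velocity over phase velocity, |e| = |v| / c."""
    return particle_velocity_ms / phase_velocity_ms

# e.g. 1 mm/s ground velocity in a 3.5 km/s surface wave -> ~2.9e-7 strain
eps = plane_wave_strain(1e-3, 3500.0)
```

Errors in the assumed phase velocity map directly into the strain amplitude, which is one reason the estimated amplitudes can be systematically off even when the phase agrees well.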
Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system
Stewart, M.; Langevin, C.
1999-01-01
A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 × 10^5 m³/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping.
While the 1981 'impact' model can reasonably predict short-term, wellfield-scale effects of pumping, using a 75 day simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.
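The steady-state assumption criticized above can be illustrated with the Cooper-Jacob approximation to the Theis solution, under which confined or semiconfined drawdown grows logarithmically with time rather than leveling off after a few days. A minimal sketch with hypothetical aquifer parameters (illustrative only, not the 1981 model's values):

```python
import math

def jacob_drawdown(Q, T, S, r, t):
    """Cooper-Jacob approximation to the Theis solution: drawdown (m) at
    distance r (m) from a well pumping Q (m^3/day) for t days, given
    transmissivity T (m^2/day) and storativity S. Drawdown grows with
    log(t): there is no early steady state in the confined response."""
    return (Q / (4 * math.pi * T)) * math.log(2.25 * T * t / (r ** 2 * S))

# Hypothetical parameters; Q matches the permitted withdrawal rate.
Q, T, S, r = 1.1e5, 5.0e3, 1.0e-3, 1000.0

s_75d = jacob_drawdown(Q, T, S, r, 75)        # the permit's simulation length
s_12y = jacob_drawdown(Q, T, S, r, 12 * 365)  # the post-audit horizon
print(f"75 days: {s_75d:.1f} m, 12 years: {s_12y:.1f} m")
```

With these numbers the 12-year drawdown is roughly 60% larger than the 75-day value, which is the qualitative behavior the post audit describes.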
40 CFR 266.106 - Standards to control metals emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...
40 CFR 266.106 - Standards to control metals emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...
49 CFR 238.431 - Brake system.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Brake system. 238.431 Section 238.431... Equipment § 238.431 Brake system. (a) A passenger train's brake system shall be capable of stopping the... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow...
The off-site consequence analysis (OCA) evaluates the potential for worst-case and alternative accidental release scenarios to harm the public and environment around the facility. Public disclosure would likely reduce the number/severity of incidents.
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2010 CFR
2010-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2010 CFR
2010-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
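The per-pipe arithmetic the excerpt describes (time to discover the release plus time to shut down flow, times the flow rate, plus the volume drained from the line) can be sketched as follows. The snippet elides the regulation's exact terms, so the flow-rate treatment and all facility figures here are assumptions:

```python
def pipe_worst_case_discharge(discover_hr, shutdown_hr, flow_bbl_per_hr,
                              line_volume_bbl):
    """Worst-case discharge from one pipe, in barrels: flow continues for the
    maximum discovery time plus the maximum shutdown time, and the line's own
    contents then drain regardless of secondary containment."""
    return (discover_hr + shutdown_hr) * flow_bbl_per_hr + line_volume_bbl

# Hypothetical facility with two transfer lines.
lines = [
    dict(discover_hr=0.5, shutdown_hr=0.25, flow_bbl_per_hr=4000,
         line_volume_bbl=300),
    dict(discover_hr=1.0, shutdown_hr=0.5, flow_bbl_per_hr=1500,
         line_volume_bbl=120),
]
total = sum(pipe_worst_case_discharge(**ln) for ln in lines)
print(f"facility worst-case piping discharge: {total:.0f} bbl")
```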
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2011 CFR
2011-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2012 CFR
2012-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2013 CFR
2013-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 154.1029 - Worst case discharge.
Code of Federal Regulations, 2014 CFR
2014-07-01
... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2013 CFR
2013-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2014 CFR
2014-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
33 CFR 155.1230 - Response plan development and evaluation criteria.
Code of Federal Regulations, 2012 CFR
2012-07-01
... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...
Competitive Strategies and Financial Performance of Small Colleges
ERIC Educational Resources Information Center
Barron, Thomas A., Jr.
2017-01-01
Many institutions of higher education are facing significant financial challenges, resulting in diminished economic viability and, in the worst cases, the threat of closure (Moody's Investor Services, 2015). The study was designed to explore the effectiveness of competitive strategies for small colleges in terms of financial performance. Five…
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2013 CFR
2013-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2012 CFR
2012-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2014 CFR
2014-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
30 CFR 254.21 - How must I format my response plan?
Code of Federal Regulations, 2010 CFR
2010-07-01
... divide your response plan for OCS facilities into the sections specified in paragraph (b) and explained in the other sections of this subpart. The plan must have an easily found marker identifying each.... (ii) Contractual agreements. (iii) Worst case discharge scenario. (iv) Dispersant use plan. (v) In...
Safety in the Chemical Laboratory: Laboratory Air Quality: Part I. A Concentration Model.
ERIC Educational Resources Information Center
Butcher, Samuel S.; And Others
1985-01-01
Offers a simple model for estimating vapor concentrations in instructional laboratories. Three methods are described for measuring ventilation rates, and the results of measurements in six laboratories are presented. The model should provide a simple screening tool for evaluating worst-case personal exposures. (JN)
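A concentration model of this kind is commonly a well-mixed single-box balance, in which the worst-case exposure is the steady state C = G/Q (generation rate over ventilation rate). A sketch under that assumption; the emission and ventilation figures are hypothetical, not from the article:

```python
import math

def concentration(t_min, G, Q, V, C0=0.0):
    """Well-mixed single-box model: V dC/dt = G - Q*C, so concentration
    relaxes toward the worst-case steady state C_ss = G/Q.
    G: emission rate (mg/min), Q: ventilation (m^3/min), V: room volume (m^3)."""
    C_ss = G / Q
    return C_ss + (C0 - C_ss) * math.exp(-Q * t_min / V)

# Hypothetical lab: 50 mg/min released, 20 m^3/min ventilation, 100 m^3 room.
for t in (5, 15, 60):
    print(f"t = {t:3d} min: C = {concentration(t, 50, 20, 100):.2f} mg/m^3")
print(f"worst case (steady state): {50 / 20:.2f} mg/m^3")
```

This is the "screening tool" pattern: if even the steady-state G/Q value is below the exposure limit, no finer analysis is needed.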
A Didactic Analysis of Functional Queues
ERIC Educational Resources Information Center
Rinderknecht, Christian
2011-01-01
When first introduced to the analysis of algorithms, students are taught how to assess the best and worst cases, whereas the mean and amortized costs are considered advanced topics, usually saved for graduates. When presenting the latter, aggregate analysis is explained first because it is the most intuitive kind of amortized analysis, often…
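The classic object of such an analysis is the two-list functional queue, where a single dequeue can cost O(n) in the worst case but aggregate analysis gives O(1) amortized cost. A minimal Python sketch using cons cells:

```python
# Persistent FIFO queue as a pair of cons-lists (front, back); None is the
# empty list. enqueue is O(1). A dequeue that finds the front empty reverses
# the back list, which is O(n) in the worst case -- but each element is
# reversed at most once, so by aggregate analysis n operations cost O(n)
# total: O(1) amortized per operation.

def reverse(xs):
    out = None
    while xs is not None:
        head, xs = xs
        out = (head, out)
    return out

EMPTY = (None, None)

def enqueue(q, x):
    front, back = q
    return (front, (x, back))   # push onto the back list

def dequeue(q):
    front, back = q
    if front is None:           # the expensive, rarely-taken step
        front, back = reverse(back), None
    if front is None:
        raise IndexError("dequeue from empty queue")
    head, rest = front
    return head, (rest, back)

# FIFO order is preserved:
q = EMPTY
for x in (1, 2, 3):
    q = enqueue(q, x)
a, q = dequeue(q)
b, q = dequeue(q)
c, q = dequeue(q)
print(a, b, c)  # -> 1 2 3
```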
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity-modulated radiation therapy (IMRT) quality assurance (QA) protocol; second, to test whether the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with a single ion chamber and a 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and a 3D gamma test. To test the 3D QA protocol's error sensitivity, two prostate and two thoracic step-and-shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaw errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and a 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine whether QA passes for each IMRT treatment plan structure: the maximum allowed AADD is 6%; at most 4% of any structure volume may have an absolute dose difference greater than 6%; and at most 4% of any structure volume may fail the 3D gamma test with test parameters 3%/3 mm DTA. Of the three QA methods tested, the single ion chamber performed the worst, detecting 4 of 18 introduced errors; 2D QA detected 11 of 18 errors, and 3D QA detected 14 of 18 errors. PACS number: 87.56.Fc PMID:26699299
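The AADD and ADD6 statistics named in the protocol follow directly from their definitions. A sketch with hypothetical voxel doses, assuming normalization to the prescription dose (the paper's normalization choice is not stated in the abstract):

```python
def aadd_percent(measured, planned, norm_dose):
    """Average absolute dose difference over a structure's voxels,
    expressed as a percent of a normalization dose."""
    diffs = [abs(m - p) for m, p in zip(measured, planned)]
    return 100.0 * (sum(diffs) / len(diffs)) / norm_dose

def add6_volume_percent(measured, planned, norm_dose, threshold_pct=6.0):
    """Percent of the structure volume (here, fraction of voxels) whose
    absolute dose difference exceeds threshold_pct of the normalization dose."""
    limit = threshold_pct / 100.0 * norm_dose
    over = [abs(m - p) > limit for m, p in zip(measured, planned)]
    return 100.0 * sum(over) / len(over)

# Hypothetical five-voxel structure, normalized to a 2 Gy fraction dose.
planned = [2.00, 1.90, 1.80, 2.00, 1.70]
measured = [2.02, 1.88, 1.95, 2.00, 1.71]
print(aadd_percent(measured, planned, 2.0))         # within the 6% AADD limit
print(add6_volume_percent(measured, planned, 2.0))  # 20% > 4%: structure fails
```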
Carstens, Keri; Anderson, Jennifer; Bachman, Pamela; De Schrijver, Adinda; Dively, Galen; Federici, Brian; Hamer, Mick; Gielkens, Marco; Jensen, Peter; Lamp, William; Rauschen, Stefan; Ridley, Geoff; Romeis, Jörg; Waggoner, Annabel
2012-08-01
Environmental risk assessments (ERA) support regulatory decisions for the commercial cultivation of genetically modified (GM) crops. The ERA for terrestrial agroecosystems is well-developed, whereas guidance for ERA of GM crops in aquatic ecosystems is not as well-defined. The purpose of this document is to demonstrate how comprehensive problem formulation can be used to develop a conceptual model and to identify potential exposure pathways, using Bacillus thuringiensis (Bt) maize as a case study. Within problem formulation, the insecticidal trait, the crop, the receiving environment, and protection goals were characterized, and a conceptual model was developed to identify routes through which aquatic organisms may be exposed to insecticidal proteins in maize tissue. Following a tiered approach for exposure assessment, worst-case exposures were estimated using standardized models, and factors mitigating exposure were described. Based on exposure estimates, shredders were identified as the functional group most likely to be exposed to insecticidal proteins. However, even using worst-case assumptions, the exposure of shredders to Bt maize was low and studies supporting the current risk assessments were deemed adequate. Determining if early tier toxicity studies are necessary to inform the risk assessment for a specific GM crop should be done on a case by case basis, and should be guided by thorough problem formulation and exposure assessment. The processes used to develop the Bt maize case study are intended to serve as a model for performing risk assessments on future traits and crops.
International Round-Robin Testing of Bulk Thermoelectrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsin; Porter, Wallace D; Bottner, Harold
2011-11-01
Two international round-robin studies were conducted on transport-property measurements of bulk thermoelectric materials. The studies uncovered current measurement problems. To obtain the ZT of a material, four separate transport measurements must be made. The round-robin showed that, of the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity shows ±4-9% scatter; thermal diffusivity shows similar ±5-10% scatter. The reliability of these three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of measurement error is high because three separate runs must be made to determine Cp, and baseline shift is a persistent issue for commercial DSC instruments. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport-property testing.
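The abstract's point about error accumulation can be illustrated with first-order propagation through ZT = S²T/(ρκ), assuming independent errors and κ obtained from diffusivity, density, and Cp. The Seebeck and Cp scatter values below are hypothetical stand-ins (the abstract gives no number for either); resistivity and diffusivity use the quoted upper bounds:

```python
import math

def zt(S, rho, kappa, T):
    """Figure of merit ZT = S^2 * T / (rho * kappa); S in V/K, rho in ohm-m,
    kappa in W/(m K), T in K."""
    return S ** 2 * T / (rho * kappa)

def zt_relative_error(rel_S, rel_rho, rel_diff, rel_cp):
    """First-order propagation with independent errors, taking
    kappa = diffusivity * density * Cp (density scatter neglected).
    S enters ZT squared, so its relative error counts twice."""
    rel_kappa = math.hypot(rel_diff, rel_cp)
    return math.sqrt((2 * rel_S) ** 2 + rel_rho ** 2 + rel_kappa ** 2)

print(f"ZT = {zt(200e-6, 1e-5, 1.5, 300):.2f}")  # a typical good material
# Seebeck ~2% (hypothetical; it was the consistent one), resistivity 9%,
# diffusivity 10%, Cp 10% (hypothetical; DSC was "the worst problem").
print(f"{100 * zt_relative_error(0.02, 0.09, 0.10, 0.10):.0f}% on ZT")
```

Even with an optimistic Cp scatter, the combined uncertainty on ZT approaches 20%, which is why the round-robin emphasizes standardized procedures and the Dulong-Petit guideline.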
Aircraft to aircraft intercomparison during SEMAPHORE
NASA Astrophysics Data System (ADS)
Lambert, Dominique; Durand, Pierre
1998-10-01
During the Structure des Echanges Mer-Atmosphère, Propriétés des Hétérogénéités Océaniques: Recherche Expérimentale (SEMAPHORE) experiment, performed in the Azores region in 1993, two French research aircraft were simultaneously used for in situ measurements in the atmospheric boundary layer. We present the results obtained from one intercomparison flight between the two aircraft. The mean parameters generally agree well, although the temperature has to be slightly shifted in order to be in agreement for the two aircraft. A detailed comparison of the turbulence parameters revealed no bias. The agreement is good for variances and is satisfactory for fluxes and skewness. A thorough study of the errors involved in flux computation revealed that the greatest accuracy is obtained for latent heat flux. Errors in sensible heat flux are considerably greater, and the worst results are obtained for momentum flux. The latter parameter, however, is more accurate than expected from previous parameterizations.
Ng, Kar Yong; Awang, Norhashidah
2018-01-06
Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations from predictor variables including meteorological parameters and gaseous pollutants. Three models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and lagged PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on a par with the RTSE model in forecasting accuracy, while the MLR1 model was the worst.
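An MLR2-style 1-day-ahead forecast (yesterday's meteorology plus yesterday's PM10 predicting today's PM10) can be sketched on synthetic data. The series, coefficients, and units below are fabricated for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily series standing in for the monitoring data (hypothetical):
# temperature (deg C), wind speed (m/s), and a PM10 series driven by
# yesterday's meteorology plus persistence.
n = 200
temp = 25 + 3 * rng.standard_normal(n)
wind = 2 + 0.5 * rng.standard_normal(n)
pm10 = np.empty(n)
pm10[0] = 50.0
for t in range(1, n):
    pm10[t] = (0.6 * pm10[t - 1] + 1.5 * temp[t - 1] - 4.0 * wind[t - 1]
               + rng.standard_normal())

# MLR2-style design matrix: intercept, lagged predictors, lagged PM10.
X = np.column_stack([np.ones(n - 1), temp[:-1], wind[:-1], pm10[:-1]])
y = pm10[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = float(np.sqrt(np.mean((y - X @ beta) ** 2)))
print("lag-PM10 coefficient:", round(float(beta[3]), 2), "RMSE:", round(rmse, 2))
```

Dropping the `pm10[:-1]` column turns this into the MLR1 form, and its fit degrades accordingly, mirroring the paper's ranking.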
2000-08-01
forefoot with the foot in the neutral position, and (b) similar to (a) but with heel landing. Although the authors reported no absolute strain values...diameter of sensors (or, in the case of a rectangular sensor, width as measured along pin axis). Worst case: Strike line from inside edges of sensors...potoroo it is just prior to "toe strike". The locomotion of the potoroo is described as digitigrade, unlike humans, who walk in a plantigrade manner
Space Based Intelligence, Surveillance, and Reconnaissance Contribution to Global Strike in 2035
2012-02-15
include using high altitude air platforms and airships as a short-term solution, and small satellites with an Operationally Responsive Space (ORS) launch...irreversible threats, along with a worst case scenario. Section IV provides greater detail of the high altitude air platform, airship, and commercial space...Resultantly, the U.S. could use high altitude air platforms, airships, and cyber to complement its space systems in case of denial, degradation, or
ERIC Educational Resources Information Center
Goldstein, Philip J.
2009-01-01
The phrase "worst since the Great Depression" has seemingly punctuated every economic report. The United States is experiencing the worst housing market, the worst unemployment level, and the worst drop in gross domestic product since the Great Depression. Although the steady drumbeat of bad news may have made everyone nearly numb, one…
The lionfish Pterois sp. invasion: Has the worst-case scenario come to pass?
Côté, I M; Smith, N S
2018-03-01
This review revisits the traits thought to have contributed to the success of Indo-Pacific lionfish Pterois sp. as an invader in the western Atlantic Ocean and the worst-case scenario about their potential ecological effects in light of the more than 150 studies conducted in the past 5 years. Fast somatic growth, resistance to parasites, effective anti-predator defences and an ability to circumvent predator recognition mechanisms by prey have probably contributed to rapid population increases of lionfish in the invaded range. However, evidence that lionfish are strong competitors is still ambiguous, in part because demonstrating competition is challenging. Geographic spread has likely been facilitated by the remarkable capacity of lionfish for prolonged fasting in combination with other broad physiological tolerances. Lionfish have had a large detrimental effect on native reef-fish populations in the northern part of the invaded range, but similar effects have yet to be seen in the southern Caribbean. Most other envisaged direct and indirect consequences of lionfish predation and competition, even those that might have been expected to occur rapidly, such as shifts in benthic composition, have yet to be realized. Lionfish populations in some of the first areas invaded have started to decline, perhaps as a result of resource depletion or ongoing fishing and culling, so there is hope that these areas have already experienced the worst of the invasion. In closing, we place lionfish in a broader context and argue that it can serve as a new model to test some fundamental questions in invasion ecology. © 2018 The Fisheries Society of the British Isles.
Kiatpongsan, Sorapop; Kim, Jane J
2014-01-01
Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15-30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. 
To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain.
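The threshold logic can be sketched by inverting the ICER condition: the added vaccine cost stays acceptable while (added cost - averted treatment costs) / life-years gained ≤ willingness-to-pay, with WTP set at 1x per-capita GDP for "very cost-effective" and 3x for "cost-effective". All inputs below are hypothetical, not the model's outputs:

```python
def max_added_vaccine_cost(wtp_per_life_year, life_years_gained, averted_costs):
    """Largest added cost per vaccinated girl that keeps the 9-valent vaccine
    under the willingness-to-pay (WTP) threshold, by rearranging
    ICER = (added_cost - averted_costs) / life_years_gained <= WTP."""
    return wtp_per_life_year * life_years_gained + averted_costs

# Hypothetical inputs: per-capita GDP I$3000, 0.002 discounted life-years
# gained per vaccinated girl, I$3.5 in averted treatment costs per girl.
very_ce = max_added_vaccine_cost(3000, 0.002, 3.5)  # WTP = 1x GDP per capita
ce = max_added_vaccine_cost(3 * 3000, 0.002, 3.5)   # WTP = 3x GDP per capita
print(f"very cost-effective up to I${very_ce}, cost-effective up to I${ce}")
```

The structure mirrors the paper's result: the 3x-GDP threshold yields a substantially higher ceiling than the 1x-GDP threshold, and best/worst-case scenarios shift both by changing the life-years-gained input.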
Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia
2018-06-01
The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.
78 FR 53494 - Dam Safety Modifications at Cherokee, Fort Loudoun, Tellico, and Watts Bar Dams
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... fundamental part of this mission was the construction and operation of an integrated system of dams and... by the Federal Emergency Management Agency, TVA prepares for the worst case flooding event in order... appropriate best management practices during all phases of construction and maintenance associated with the...
NASA Technical Reports Server (NTRS)
Guman, W. J. (Editor)
1971-01-01
Thermal vacuum design supporting thruster tests indicates no problems under the worst-case conditions of sink temperature and spin rate. The reliability of the system was calculated to be 0.92 for a five-year mission; excluding the main energy-storage capacitor, it is 0.98.
40 CFR 300.320 - General pattern of response.
Code of Federal Regulations, 2010 CFR
2010-07-01
...., substantial threat to the public health or welfare of the United States, worst case discharge) of the... private party efforts, and where the discharge does not pose a substantial threat to the public health or... 40 Protection of Environment 27 2010-07-01 2010-07-01 false General pattern of response. 300.320...
Small Wars 2.0: A Working Paper on Land Force Planning After Iraq and Afghanistan
2011-02-01
official examination of future ground combat demands that look genetically distinct from those undertaken in the name of the WoT. The concept of...under the worst-case rubric but for very different reasons. The latter are small wars. However, that by no means aptly describes their size
The +vbar breakout during approach to Space Station Freedom
NASA Technical Reports Server (NTRS)
Dunham, Scott D.
1993-01-01
A set of burn profiles was developed to provide bounding jet firing histories for a +vbar breakout during approaches to Space Station Freedom. The delta-v sequences were designed to place the Orbiter on a safe trajectory under worst case conditions and to try to minimize plume impingement on Space Station Freedom structure.
A Comparison of Learning Technologies for Teaching Spacecraft Software Development
ERIC Educational Resources Information Center
Straub, Jeremy
2014-01-01
The development of software for spacecraft represents a particular challenge and is, in many ways, a worst case scenario from a design perspective. Spacecraft software must be "bulletproof" and operate for extended periods of time without user intervention. If the software fails, it cannot be manually serviced. Software failure may…
Providing Exemplars in the Learning Environment: The Case For and Against
ERIC Educational Resources Information Center
Newlyn, David
2013-01-01
Contemporary education has moved towards the requirement of express articulation of assessment criteria and standards in an attempt to provide legitimacy in the measurement of student performance/achievement. Exemplars are provided examples of best or worst practice in the educational environment, which are designed to assist students to increase…
Ageing of Insensitive DNAN Based Melt-Cast Explosives
2014-08-01
diurnal cycle (representative of the MEAO climate). Analysis of the ingredient composition, sensitiveness, mechanical and thermal properties was... The first test condition was chosen to provide a worst-case scenario. Analysis of the ingredient composition, theoretical maximum density, sensitiveness... of ARX-4027 and ARX-4028 ingredients.
Power Analysis for Anticipated Non-Response in Randomized Block Designs
ERIC Educational Resources Information Center
Pustejovsky, James E.
2011-01-01
Recent guidance on the treatment of missing data in experiments advocates the use of sensitivity analysis and worst-case bounds analysis for addressing non-ignorable missing data mechanisms; moreover, plans for the analysis of missing data should be specified prior to data collection (Puma et al., 2009). While these authors recommend only that…
33 CFR 154.1120 - Operating restrictions and interim operating authorization.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Facility Operating in Prince William Sound, Alaska § 154.1120 Operating restrictions and interim operating authorization. (a) The owner or operator of a TAPAA facility may not operate in Prince William Sound, Alaska... practicable, a worst case discharge or a discharge of 200,000 barrels of oil, whichever is greater, in Prince...
Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground
ERIC Educational Resources Information Center
Oberg, Gunilla
2009-01-01
Newcomers often underestimate the challenges of interdisciplinary work and, as a rule, do not spend sufficient time to allow them to overcome differences and create common ground, which in turn leads to frustration, unresolved conflicts, and, in the worst case scenario, discontinued work. The key to successful collaboration is to facilitate the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-14
... notice is provided in accordance with the Council on Environmental Quality's regulations (40 CFR parts... interconnected, fabric-lined, sand-filled HESCO containers in order to safely pass predicted worst-case..., but will not necessarily be limited to, the potential impacts on water quality, aquatic and...
ERIC Educational Resources Information Center
Tercek, Patricia M.
This practicum study examined kindergarten teachers' perspectives regarding mixed-age groupings that included kindergarten students. The study focused on pedagogical reasons for using mixed-age grouping, ingredients necessary for successful implementation of a multiage program that includes kindergartners, and the perceived effects of a multiage…
Case Study: POLYTECH High School, Woodside, Delaware.
ERIC Educational Resources Information Center
Southern Regional Education Board, Atlanta, GA.
POLYTECH High School in Woodside, Delaware, has gone from being among the worst schools in the High Schools That Work (HSTW) network to among the best. Polytech, which is now a full-time technical high school, has improved its programs and outcomes by implementing a series of organizational, curriculum, teaching, guidance, and leadership changes,…
Commercially sterilized mussel meats (Mytilus chilensis): a study on process yield.
Almonacid, S; Bustamante, J; Simpson, R; Urtubia, A; Pinto, M; Teixeira, A
2012-06-01
The processing steps most responsible for yield loss in the manufacture of canned mussel meats are the thermal treatments of precooking to remove meats from shells, and thermal processing (retorting) to render the final canned product commercially sterile for long-term shelf stability. The objective of this study was to investigate and evaluate the impact of different combinations of process variables on the ultimate drained weight of the final mussel product (Mytilus chilensis), while verifying that any differences found were statistically and economically significant. The process variables selected for this study were precooking time, brine salt concentration, and retort temperature. Results indicated 2 combinations of process variables producing the widest difference in final drained weight, designated best combination and worst combination, with 35% and 29% yield, respectively. Significance of this difference was determined by employing a Bootstrap methodology, which assumes an empirical distribution of statistical error. A difference of nearly 6 percentage points in total yield was found. This represents a 20% increase in annual sales from the same quantity of raw material. In addition to the higher yield, the best-process conditions included a retort process time 65% shorter than that of the worst process. This difference in yield could have a significant economic impact, important to the mussel canning industry. © 2012 Institute of Food Technologists®
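The Bootstrap significance test described above can be sketched as follows; the per-batch yield figures and sample sizes are hypothetical, and only the percentile-resampling logic follows the method described.

```python
import numpy as np

def bootstrap_diff_ci(best, worst, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the difference in mean yield (best - worst)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # resample each group with replacement and record the mean difference
        b = rng.choice(best, size=len(best), replace=True)
        w = rng.choice(worst, size=len(worst), replace=True)
        diffs[i] = b.mean() - w.mean()
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical per-batch drained-weight yields (%) for the two combinations
best_yield = np.array([35.2, 34.8, 35.5, 34.9, 35.1, 35.3])
worst_yield = np.array([29.1, 28.8, 29.4, 29.0, 29.2, 28.9])
lo, hi = bootstrap_diff_ci(best_yield, worst_yield)
# A confidence interval that excludes zero indicates the ~6-point
# yield difference is statistically significant.
```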
Guide for Oxygen Component Qualification Tests
NASA Technical Reports Server (NTRS)
Bamford, Larry J.; Rucker, Michelle A.; Dobbin, Douglas
1996-01-01
Although oxygen is a chemically stable element that is not shock sensitive, will not decompose, and is not flammable, its use nevertheless carries a risk that should never be overlooked, because oxygen is a strong oxidizer that vigorously supports combustion. Safety is of primary concern in oxygen service. To promote safety in oxygen systems, the flammability of materials used in them should be analyzed. At the NASA White Sands Test Facility (WSTF), we have performed configurational tests of components specifically engineered for oxygen service. These tests follow a detailed WSTF oxygen hazards analysis. The stated objective of the tests was to provide performance test data for customer use as part of a qualification plan for a particular component in a particular configuration, and under worst-case conditions. In this document - the 'Guide for Oxygen Component Qualification Tests' - we outline recommended test systems, and cleaning, handling, and test procedures that address worst-case conditions. It should be noted that test results apply specifically to: manual valves, remotely operated valves, check valves, relief valves, filters, regulators, flexible hoses, and intensifiers. Component systems are not covered.
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-03-01
This work aimed to develop a robust virtual sensing design methodology for sensing and active control applications in vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband interior acoustic sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic models of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly when dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
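Choosing an optimal convex combination of sub-filters for worst-case performance reduces to a small linear program. A minimal sketch, assuming a scalar error metric E[i, j] for sub-filter i under perturbed model j is available (the matrix below is illustrative, not from the study):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_weights(E):
    """E[i, j]: error of Kalman sub-filter i under perturbed model j.
    Returns convex weights w minimising max_j sum_i w[i] * E[i, j]."""
    n, m = E.shape
    c = np.r_[np.zeros(n), 1.0]                    # minimise the auxiliary bound t
    A_ub = np.c_[E.T, -np.ones(m)]                 # E^T w - t <= 0 for every model j
    b_ub = np.zeros(m)
    A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)   # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

E = np.array([[1.0, 4.0],   # filter 0: good on model 0, poor on model 1
              [4.0, 1.0]])  # filter 1: the reverse
w, worst = minimax_weights(E)
# The optimal blend hedges between the two sub-filters: either filter
# alone has worst-case error 4, the blend does strictly better.
```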
Locke, Sarah J; Deziel, Nicole C; Koh, Dong-Hee; Graubard, Barry I; Purdue, Mark P; Friesen, Melissa C
2017-02-01
We evaluated predictors of differences in published occupational lead concentrations for activities disturbing material painted with or containing lead in U.S. workplaces, to aid historical exposure reconstruction. For these tasks, 221 air and 113 blood lead summary results (1960-2010) were extracted from a previously developed database. Differences in the natural log-transformed geometric mean (GM) for year, industry, job, and other ancillary variables were evaluated in meta-regression models that weighted each summary result by its inverse variance and sample size. Air and blood lead GMs declined 5%/year and 6%/year, respectively, in most industries. Exposure contrast in the GMs across the nine jobs and five industries was higher based on air versus blood concentrations. For welding activities, blood lead GMs were 1.7 times higher in worst-case versus non-worst-case scenarios. Job-, industry-, and time-specific exposure differences were identified; other determinants were too sparse or collinear to characterize. Am. J. Ind. Med. 60:189-197, 2017. © 2017 Wiley Periodicals, Inc.
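The meta-regression trend can be illustrated by a weighted least-squares fit of ln(GM) on calendar year, where exp(slope) - 1 gives the fractional change per year; the data series and unit weights below are illustrative, not the study's:

```python
import numpy as np

def weighted_trend(years, ln_gm, weights):
    """WLS slope of ln(GM) on year; exp(slope) - 1 is the fractional change/year."""
    W = np.diag(weights)
    X = np.c_[np.ones_like(years), years]          # intercept + year
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ln_gm)
    return beta[1]

# Hypothetical noise-free summary results with a true 5 %/year decline
years = np.arange(1970, 2001, dtype=float)
ln_gm = np.log(100.0) + np.log(0.95) * (years - 1970)
slope = weighted_trend(years, ln_gm, np.ones_like(years))
pct_per_year = (np.exp(slope) - 1) * 100
```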
A radiation briefer's guide to the PIKE Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steadman, Jr, C R
1990-03-01
Gamma-radiation-exposure estimates to populations living immediately downwind from the Nevada Test Site have been required for many years by the US Department of Energy (DOE) before each containment-designed nuclear detonation. A highly unlikely "worst-case" scenario is utilized which assumes that there will be an accidental massive venting of radioactive debris into the atmosphere shortly after detonation. The Weather Service Nuclear Support Office (WSNSO) has supplied DOE with such estimates for the last 25 years using the WSNSO Fallout Scaling Technique (FOST), which employs a worst-case analog event that actually occurred in the past. The "PIKE Model" is the application of the FOST using the PIKE nuclear event as the analog. This report, which is primarily intended for WSNSO meteorologists who derive radiation estimates, gives a brief history of the "model," presents the mathematical, radiological, and meteorological concepts upon which it is based, states its limitations, explains its apparent advantages over more sophisticated models, and details how it is used operationally. 10 refs., 31 figs., 7 tabs.
Housing for the "Worst of the Worst" Inmates: Public Support for Supermax Prisons
ERIC Educational Resources Information Center
Mears, Daniel P.; Mancini, Christina; Beaver, Kevin M.; Gertz, Marc
2013-01-01
Despite concerns whether supermaximum security prisons violate human rights or prove effective, these facilities have proliferated in America over the past 25 years. This punishment--aimed at the "worst of the worst" inmates and involving 23-hr-per-day single-cell confinement with few privileges or services--has emerged despite little…
Hydraulic Fracturing of Soils; A Literature Review.
1977-03-01
best case, or worst case. The study reported herein is an overview of one such test or technique, hydraulic fracturing, which is defined as the... formation of cracks in soil by the application of hydraulic pressure greater than the minor principal stress at that point. Hydraulic fracturing, as a... hydraulic fracturing as a means for determination of lateral stresses, the technique can still be used for determining in situ total stress and permeability at a point in a cohesive soil.
Liu, Yanjun; Liu, Yanting; Li, Hao; Fu, Xindi; Guo, Hanwen; Meng, Ruihong; Lu, Wenjing; Zhao, Ming; Wang, Hongtao
2016-12-01
Aromatic compounds (ACs) emitted from landfills have attracted considerable public attention due to their adverse impacts on the environment and human health. This study assessed the health risk impacts of the fugitive ACs emitted from the working face of a municipal solid waste (MSW) landfill in China. The emission data were acquired by long-term in-situ sampling using a modified wind tunnel system. The uncertainty of the aromatic emissions was determined statistically, and emission factors were developed accordingly. Two scenarios, 'normal-case' and 'worst-case', were presented to evaluate the potential health risk under different weather conditions. For this typical large anaerobic landfill, toluene was the dominant species owing to its highest release rate (3.40±3.79 g·m⁻²·d⁻¹). Despite posing negligible non-carcinogenic risk, the ACs might bring carcinogenic risks to humans in the nearby area. Ethylbenzene was the major health threat substance. The cumulative carcinogenic risk impact area extends as far as ~1.5 km downwind for the normal-case scenario, and nearly 4 km for the worst-case scenario. Health risks of fugitive AC emissions from active landfills deserve attention, especially for landfills still receiving mixed MSW. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kruser, Jacqueline M; Nabozny, Michael J; Steffens, Nicole M; Brasel, Karen J; Campbell, Toby C; Gaines, Martha E; Schwarze, Margaret L
2015-09-01
To evaluate a communication tool called "Best Case/Worst Case" (BC/WC) based on an established conceptual model of shared decision-making. Focus group study. Older adults (four focus groups) and surgeons (two focus groups) using modified questions from the Decision Aid Acceptability Scale and the Decisional Conflict Scale to evaluate and revise the communication tool. Individuals aged 60 and older recruited from senior centers (n = 37) and surgeons from academic and private practices in Wisconsin (n = 17). Qualitative content analysis was used to explore themes and concepts that focus group respondents identified. Seniors and surgeons praised the tool for the unambiguous illustration of multiple treatment options and the clarity gained from presentation of an array of treatment outcomes. Participants noted that the tool provides an opportunity for in-the-moment, preference-based deliberation about options and a platform for further discussion with other clinicians and loved ones. Older adults worried that the format of the tool was not universally accessible for people with different educational backgrounds, and surgeons had concerns that the tool was vulnerable to physicians' subjective biases. The BC/WC tool is a novel decision support intervention that may help facilitate difficult decision-making for older adults and their physicians when considering invasive, acute medical treatments such as surgery. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.
Anatomy of emotion: a 3D study of facial mimicry.
Ferrario, V F; Sforza, C
2007-01-01
Alterations in facial motion severely impair the quality of life and social interaction of patients, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right side eye closure; left side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (intra-session error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among the four independent sessions, the left eye closure had the worst repeatability and the right eye closure the best; the differences among the various movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated sufficient repeatability for a future clinical application. Great care should be taken to assure consistent marker positioning in all subjects.
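The technical error of measurement used above is conventionally computed from paired repetitions as TEM = sqrt(Σd²/2n); a minimal sketch, with illustrative sample values rather than the study's data:

```python
import numpy as np

def tem(rep1, rep2):
    """Technical error of measurement (Dahlberg formula) for two repeated
    measurement series: TEM = sqrt(sum(d^2) / (2n)), d = paired difference."""
    d = np.asarray(rep1, dtype=float) - np.asarray(rep2, dtype=float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

# Identical repetitions give TEM = 0; a constant 1-unit disagreement
# over three landmarks gives sqrt(3 / 6) ≈ 0.707.
t_zero = tem([10.0, 12.0, 11.0], [10.0, 12.0, 11.0])
t_unit = tem([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```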
Primary Spinal Cord Melanoma: A Case Report and a Systemic Review of Overall Survival.
Zhang, Mingzhe; Liu, Raynald; Xiang, Yi; Mao, Jianhui; Li, Guangjie; Ma, Ronghua; Sun, Zhaosheng
2018-06-01
The incidence of primary spinal cord melanoma (PSCM) is rare. Several case series and case reports have been published in the literature. However, the predictive factors of PSCM survival and management options are not discussed in detail. We present a case of PSCM; total resection was achieved and chemotherapy was given postoperatively. A comprehensive search was performed on PubMed's electronic database using the words "primary spinal cord melanoma." Survival rates with various gender, location, treatment, and metastasis conditions were collected from the published articles and analyzed. Fifty-nine cases were eligible for the survival analysis; 54% were male and 46% were female. The most common location was the thorax. Patient sex and tumor location did not influence overall survival. The major presenting symptoms were weakness and paresthesia of the extremities. Metastasis or dissemination was noted in 45.16% of 31 patients. In the Kaplan-Meier survival analysis, patients who had metastasis had the worst prognosis. Extent of resection was not related to mortality. Patients who received surgery and surgery with adjuvant therapy had a better median survival than did those who had adjuvant therapy alone. Prognosis was worst in those patients who underwent only adjuvant therapy without surgery (5 months). Surgery is the first treatment of choice in treating PSCM. The goal of tumor resection is to reduce symptoms. Adjuvant therapy after surgery had a beneficial effect on limiting the metastasis. Copyright © 2018 Elsevier Inc. All rights reserved.
Wave front sensing for next generation earth observation telescope
NASA Astrophysics Data System (ADS)
Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.
2017-09-01
High resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance makes it possible to set the Nyquist frequency close to the cut-off frequency or, equivalently, to minimize the pupil diameter for a given ground sampling distance target. Up to now, defocus has been the only aberration allowed to evolve slowly and to be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions and refocusing is done with a thermal actuation of the M2 mirror. Next generation systems under study at CNES should include active optics in order to correct evolving aberrations beyond defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is a Shack-Hartmann sensor, which can be used on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the control loop's wave-front error measurement. In the worst-case scenario, this measurement must be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is then needed to fulfill this last requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high order aberrations and the characteristics of the sensor. CNES has performed a full-scale sensitivity analysis on the whole parameter set with its internally developed algorithm.
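A one-step gradient-based (Lucas-Kanade) shift estimate of the kind described can be sketched as follows; the synthetic scene and shift values are hypothetical, and a real implementation would iterate and window the estimate:

```python
import numpy as np

def lk_shift(ref, img):
    """Single-step Lucas-Kanade estimate of the translation (dx, dy)
    taking ref into img; valid for sub-pixel shifts on smooth scenes."""
    gy, gx = np.gradient(ref)                 # gy along rows (y), gx along columns (x)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -(img - ref).ravel()                  # img(x, y) ≈ ref(x - dx, y - dy)
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Synthetic smooth "landscape": a Gaussian blob shifted by a known amount
y, x = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
ref, img = blob(32.0, 32.0), blob(32.4, 31.75)   # true shift (0.4, -0.25)
dx, dy = lk_shift(ref, img)
```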
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. When the sensing and actuating devices possess the property of spatial robustness, the required control energy is reduced; the spatial distribution of disturbances is therefore integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, where the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuator pairs in sleep mode.
NASA Astrophysics Data System (ADS)
Potamias, Dimitrios; Alxneit, Ivo; Wokaun, Alexander
2017-09-01
The design, implementation, calibration, and assessment of double modulation pyrometry to measure surface temperatures of radiatively heated samples in our 1 kW imaging furnace is presented. The method requires that the intensity of the external radiation can be modulated. This was achieved by a rotating blade mounted parallel to the optical axis of the imaging furnace. Double modulation pyrometry independently measures the external radiation reflected by the sample as well as the sum of thermal and reflected radiation, and extracts the thermal emission as the difference of these signals. Thus a two-step calibration is required: first, the relative gains of the measured signals are equalized, and then a temperature calibration is performed. For the latter, we transfer the calibration from a calibrated solar-blind pyrometer that operates at a different wavelength. We demonstrate that the worst-case systematic error associated with this procedure is about 300 K but becomes negligible if a reasonable estimate of the sample's emissivity is used. An analysis of the influence of the uncertainties in the calibration coefficients reveals that one of the five coefficients contributes almost 50% to the final temperature error. On a low-emission sample like platinum, the lower detection limit is around 1700 K and the accuracy typically about 20 K. Note that these moderate specifications are specific to the use of double modulation pyrometry at the imaging furnace. They are mainly caused by the difficulty of achieving and maintaining good overlap between the hot zone, about 3 mm in diameter (full width at half height), and the measurement spot, both of which are of similar size.
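The extraction of thermal emission as the difference of the two measured signals, after the first (gain-equalisation) calibration step, amounts to the following arithmetic; all readings and the gain factor below are illustrative, not instrument values:

```python
# Hypothetical detector readings in arbitrary units.
g = 1.8                    # relative gain of the "sum" channel vs the "reflected" channel,
                           # determined in the first calibration step
s_reflected = 120.0        # signal measuring reflected external radiation only
s_sum = 450.0              # signal measuring thermal + reflected radiation

# Thermal emission is the gain-equalised difference of the two signals;
# a temperature calibration (second step) would then map it to kelvin.
thermal = s_sum - g * s_reflected
```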
NASA Astrophysics Data System (ADS)
Masoumi, Salim; McClusky, Simon; Koulali, Achraf; Tregoning, Paul
2017-04-01
Improper modeling of horizontal tropospheric gradients in GPS analysis induces errors in estimated parameters, with the largest impact on heights and tropospheric zenith delays. The conventional two-axis tilted plane model of horizontal gradients fails to provide an accurate representation of tropospheric gradients under weather conditions with asymmetric horizontal changes of refractivity. A new parametrization of tropospheric gradients whereby an arbitrary number of gradients are estimated as discrete directional wedges is shown via simulations to significantly improve the accuracy of recovered tropospheric zenith delays in asymmetric gradient scenarios. In a case study of an extreme rain event that occurred in September 2002 in southern France, the new directional parametrization is able to isolate the strong gradients in particular azimuths around the GPS stations consistent with the "V" shape spatial pattern of the observed precipitation. In another study of a network of GPS stations in the Sierra Nevada region where highly asymmetric tropospheric gradients are known to exist, the new directional model significantly improves the repeatabilities of the stations in asymmetric gradient situations while causing slightly degraded repeatabilities for the stations in normal symmetric gradient conditions. The average improvement over the entire network is ˜31%, while the improvement for one of the worst affected sites P631 is ˜49% (from 8.5 mm to 4.3 mm) in terms of weighted root-mean-square (WRMS) error and ˜82% (from -1.1 to -0.2) in terms of skewness. At the same station, the use of the directional model changes the estimates of zenith wet delay by 15 mm (˜25%).
TPX: Contractor preliminary design review. Volume 3, Design and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-06-30
Several models have been formed for investigating the maximum electromagnetic loading and magnetic field levels associated with the Tokamak Physics eXperiment (TPX) superconducting Poloidal Field (PF) coils. The analyses have been performed to support the design of the individual fourteen hoop coils forming the PF system. The coils have been sub-divided into three coil systems consisting of the central solenoid (CS), PF5 coils, and the larger radius PF6 and PF7 coils. Various electromagnetic analyses have been performed to determine the electromagnetic loadings that the coils will experience during normal operating conditions, plasma disruptions, and fault conditions. The loadings are presented as net body forces acting on individual coils, spatial variations throughout the coil cross section, and force variations along the path of the conductor due to interactions with the TF coils. Three refined electromagnetic models of the PF coil system that include a turn-by-turn description of the fields and forces during a worst-case event are presented in this report. A global model including both the TF and PF systems was formed to obtain the force variations along the path of the PF conductors resulting from interactions with the TF currents. In addition to spatial variations, the loadings are further subdivided into time-varying and steady components so that structural fatigue issues can be addressed by designers and analysts. Other electromagnetic design issues, such as the impact of the detailed coil designs on field errors, are addressed in this report. Coil features that are analyzed include radial transitions via short jogs vs. spiral-type windings and the effects of layer-to-layer rotations (i.e., clocking) on the field errors.
Ilbäck, N-G; Alzin, M; Jahrl, S; Enghardt-Barbieri, H; Busk, L
2003-02-01
Few sweetener intake studies have been performed on the general population and only one study has been specifically designed to investigate diabetics and children. This report describes a Swedish study on the estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin by children (0-15 years) and adult male and female diabetics (types I and II) of various ages (16-90 years). Altogether, 1120 participants were asked to complete a questionnaire about their sweetener intake. The response rate (71%, range 59-78%) was comparable across age and gender groups. The most consumed 'light' foodstuffs were diet soda, cider, fruit syrup, table powder, table tablets, table drops, ice cream, chewing gum, throat lozenges, sweets, yoghurt and vitamin C. The major sources of sweetener intake were beverages and table powder. About 70% of the participants, equally distributed across all age groups, read the manufacturer's specifications of the food products' content. The estimated intakes showed that neither men nor women exceeded the ADI for acesulfame-K; however, using worst-case calculations, high intakes were found in young children (169% of ADI). In general, the aspartame intake was low. Children had the highest estimated (worst-case) intake of cyclamate (317% of ADI). Children's estimated intake of saccharin only slightly exceeded the ADI at the 5% level for fruit syrup. Children had an unexpectedly high intake of tabletop sweeteners, which, in Sweden, are normally based on cyclamate. The study was performed during two winter months, when it can be assumed that the intake of sweeteners was lower than during the warm summer months. Thus, the present study probably underestimates the average intake on a yearly basis.
However, our worst-case calculations based on maximum permitted levels were performed on each individual sweetener, although exposure is probably relatively evenly distributed among all sweeteners, except for cyclamate containing table sweeteners.
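A worst-case intake screen of the kind described, multiplying maximum permitted sweetener levels by reported daily consumption and comparing the result with the ADI, can be sketched as follows; every number below is illustrative rather than taken from the study:

```python
# Illustrative worst-case cyclamate screen for a young child.
ADI_CYCLAMATE = 7.0        # mg per kg body weight per day (illustrative value)
body_weight = 15.0         # kg, a hypothetical young child

# (maximum permitted level in mg/L, daily consumption in L) per foodstuff
daily_items = [
    (400.0, 0.5),          # diet soda
    (250.0, 0.3),          # fruit syrup drink
]

# Worst-case daily intake in mg per kg body weight, then as a percent of ADI
intake = sum(level * amount for level, amount in daily_items) / body_weight
percent_of_adi = intake / ADI_CYCLAMATE * 100
```

A result above 100% flags the group for refined (non-worst-case) assessment, mirroring the study's finding that children's worst-case cyclamate intake exceeded the ADI.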
2013-01-01
Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we expose that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can account both for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. 
We contend that precautionary regulation should provide an incentive to generate better data, and we recommend adopting a pragmatic but scientifically better-founded approach to mixture risk assessment. PMID:23816180
Martin, Olwenn V; Scholze, Martin; Kortenkamp, Andreas
2013-07-01
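The abstract above notes that the outcome of probabilistic approaches to multiplying uncertainty sub-factors depends on the choice of probability distributions. A minimal Monte Carlo sketch can illustrate one aspect of this: taking the product of two sub-factors' individual 95th percentiles is more conservative than the 95th percentile of their product, because extremes of independent distributions rarely coincide. The distributional parameters below (medians 4.0 and 2.5, geometric standard deviation of 2) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal sub-factor distributions (illustrative only):
# a toxicokinetic component with median 4.0 and a toxicodynamic component
# with median 2.5, each with a geometric standard deviation of 2.
tk = rng.lognormal(mean=np.log(4.0), sigma=np.log(2.0), size=n)
td = rng.lognormal(mean=np.log(2.5), sigma=np.log(2.0), size=n)

combined = tk * td  # the combined uncertainty factor under these assumptions

# Multiplying the individual 95th percentiles is not the same as taking
# the 95th percentile of the product distribution.
p95_product_of_p95s = np.percentile(tk, 95) * np.percentile(td, 95)
p95_of_product = np.percentile(combined, 95)

print(f"product of P95s: {p95_product_of_p95s:.1f}")
print(f"P95 of product:  {p95_of_product:.1f}")
assert p95_of_product < p95_product_of_p95s
```

With these assumed lognormals the product of the individual 95th percentiles comes out near 100, while the 95th percentile of the product is roughly half that, which is the sense in which the conservatism of a combined factor depends entirely on the distributions chosen for its sub-components.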