Sample records for worst-case time complexity

  1. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
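
    A minimal sketch of the counting behind these bounds, assuming that "linearly independent clauses" refers to rank over GF(2) when each exactly-one clause is read as the parity relation "sum of its variables = 1"; the helper names (gf2_rank, exact_sat_query_bounds) are ours, not the paper's:

    ```python
    import math

    def gf2_rank(rows):
        """Rank of clause bitmasks over GF(2) via Gaussian elimination."""
        pivot = {}                            # highest set bit -> reduced row
        for row in rows:
            cur = row
            while cur:
                h = cur.bit_length() - 1
                if h in pivot:
                    cur ^= pivot[h]           # eliminate the leading bit
                else:
                    pivot[h] = cur
                    break
        return len(pivot)

    def exact_sat_query_bounds(n, clauses):
        """clauses: iterable of sets of variable indices ('exactly one true').
        Each clause implies sum_{i in c} x_i = 1 (mod 2), so the valid
        assignments lie in a subspace of dimension at most n - M'."""
        m_prime = gf2_rank([sum(1 << i for i in c) for c in clauses])
        subspace = 2 ** (n - m_prime)
        return math.sqrt(subspace), subspace  # search bound, counting bound

    # Example: 4 variables, 2 independent exactly-one constraints.
    print(exact_sat_query_bounds(4, [{0, 1, 2}, {1, 2, 3}]))  # (2.0, 4)
    ```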

  2. Time Safety Margin: Theory and Practice

    DTIC Science & Technology

    2016-09-01

    Basic Dive Recovery Terminology. The simplest definition of TSM: Time Safety Margin is the time to directly travel from the worst-case vector to an... Safety Margin (TSM). TSM is defined as the time in seconds to directly travel from the worst-case vector (i.e. worst-case combination of parameters... invoked by this AFI, base recovery planning and risk management upon the calculated TSM. TSM is the time in seconds to directly travel from the worst case...

  3. Learning Search Control Knowledge for Deep Space Network Scheduling

    NASA Technical Reports Server (NTRS)

    Gratch, Jonathan; Chien, Steve; DeJong, Gerald

    1993-01-01

    While the general class of scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.

  4. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
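
    To make the constant-time, constant-memory idea concrete, here is a sketch in the spirit of greedy slope-corridor ("swing filter") segmentation, not a reconstruction of the paper's specific algorithm: each sample only updates two slope bounds, and the single parameter is the error tolerance eps:

    ```python
    def swing_filter(ts, ys, eps):
        """Greedy connected piecewise linear approximation: O(1) work and
        O(1) state per sample, max error eps per sample against the segment
        line (the join at segment boundaries can loosen this slightly).
        Assumes len(ts) >= 2 and strictly increasing timestamps."""
        segments = []                         # (t_start, y_start, slope, t_end)
        t0, y0 = ts[0], ys[0]
        lo, hi = float("-inf"), float("inf")  # feasible slope corridor
        t_prev = ts[0]
        for t, y in zip(ts[1:], ys[1:]):
            dt = t - t0
            nlo, nhi = (y - eps - y0) / dt, (y + eps - y0) / dt
            if max(lo, nlo) > min(hi, nhi):   # corridor empty: close segment
                slope = (lo + hi) / 2
                segments.append((t0, y0, slope, t_prev))
                t0, y0 = t_prev, y0 + slope * (t_prev - t0)
                dt = t - t0
                lo, hi = (y - eps - y0) / dt, (y + eps - y0) / dt
            else:
                lo, hi = max(lo, nlo), min(hi, nhi)
            t_prev = t
        segments.append((t0, y0, (lo + hi) / 2, t_prev))
        return segments

    # Two segments: a rise then a fall.
    print(swing_filter([0, 1, 2, 3, 4], [0.0, 1.0, 2.0, 1.0, 0.0], eps=0.5))
    ```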

  5. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, an AC parametric worst-case pattern selection test, determination of the worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst-case input high current, output low current, and data setup time are among the results presented.

  6. Conscious worst case definition for risk assessment, part I: a knowledge mapping approach for defining most critical risk factors in integrative risk management of chemicals and nanomaterials.

    PubMed

    Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders

    2010-08-15

    This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which again challenges risk assessors in terms of giving the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis. Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk components in cumulative risk management. Copyright 2009 Elsevier B.V. All rights reserved.

  7. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.

    PubMed

    Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal

    2011-08-18

    RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to the computational analysis of functional RNA. Over more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a dynamic programming approach similar to the one presented by Nussinov and Jacobson in the late 70s, which typically yields cubic worst-case running time. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst-case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones currently known for reducing the asymptotic running time bounds of the standard algorithms.

  8. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach

    PubMed Central

    2011-01-01

    Background: RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to the computational analysis of functional RNA. Over more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a dynamic programming approach similar to the one presented by Nussinov and Jacobson in the late 70s, which typically yields cubic worst-case running time. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. Results: We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. Conclusions: The algorithms presented in this paper improve the theoretical asymptotic worst-case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones currently known for reducing the asymptotic running time bounds of the standard algorithms. PMID:21851589
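
    For reference, the cubic-time baseline these results improve on: a minimal Nussinov-style dynamic program (maximum nested base pairs), whose innermost loop over split points k is what Valiant-style methods replace with fast matrix multiplication; min_loop is a hypothetical hairpin-gap parameter:

    ```python
    def nussinov(seq, min_loop=1):
        """O(n^3) Nussinov-style maximum base-pairing DP -- the cubic
        baseline whose split-point loop Valiant-style methods replace
        with matrix multiplication. min_loop: minimum hairpin gap."""
        pair = {("A", "U"), ("U", "A"), ("C", "G"),
                ("G", "C"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]                    # j unpaired
                for k in range(i, j - min_loop):       # j paired with k
                    if (seq[k], seq[j]) in pair:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + dp[k + 1][j - 1])
                dp[i][j] = best
        return dp[0][n - 1] if n else 0

    print(nussinov("GGGAAAUCC"))  # maximum number of nested base pairs
    ```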

  9. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 2, Appendix A

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.

  10. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.

  11. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order of space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency must in the worst case take O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency and show that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
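
    A compact sequential reference point for the arc consistency problem discussed here, assuming binary constraints given as allowed-pair predicates; this is AC-3 (O(e*a^3) worst case) rather than the optimal AC-4 or the parallel variants the record describes:

    ```python
    from collections import deque

    def ac3(domains, constraints):
        """AC-3 arc consistency. domains: {var: set(values)}, mutated in
        place. constraints: {(x, y): predicate} giving allowed (vx, vy)
        pairs, with both (x, y) and (y, x) present for each edge."""
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            allowed = constraints[(x, y)]
            pruned = {vx for vx in domains[x]
                      if not any(allowed(vx, vy) for vy in domains[y])}
            if pruned:
                domains[x] -= pruned
                if not domains[x]:
                    return False              # domain wipe-out: inconsistent
                queue.extend((z, w) for (z, w) in constraints
                             if w == x and z != y)
        return True

    # Example: two variables that must differ.
    doms = {"a": {1, 2}, "b": {2}}
    ne = lambda u, v: u != v
    print(ac3(doms, {("a", "b"): ne, ("b", "a"): ne}), doms)
    # True {'a': {1}, 'b': {2}}
    ```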

  12. Sorting on STAR [CDC computer algorithm timing comparison]

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2 as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
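
    A sketch of the Batcher-style idea: the compare-exchange pattern below is fixed and data-independent, which is what makes the N(log N)^2 network vector-friendly; this is the common bitonic variant of Batcher's networks, not a reconstruction of the STAR code:

    ```python
    def bitonic_sort(a):
        """Batcher's bitonic sort in place; len(a) must be a power of two.
        O(N log^2 N) compare-exchanges in fixed, data-independent stages --
        the regularity that maps well onto vector hardware."""
        n = len(a)
        assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
        k = 2
        while k <= n:                     # size of runs being merged
            j = k // 2
            while j >= 1:                 # compare-exchange distance
                for i in range(n):
                    partner = i ^ j
                    if partner > i:
                        ascending = (i & k) == 0
                        if (a[i] > a[partner]) == ascending:
                            a[i], a[partner] = a[partner], a[i]
                j //= 2
            k *= 2
        return a

    print(bitonic_sort([7, 3, 1, 8, 6, 2, 5, 4]))  # [1, 2, ..., 8]
    ```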

  13. Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration

    NASA Astrophysics Data System (ADS)

    Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.

    2010-12-01

    The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface-water resources. Legislative action has been undertaken in many countries to protect surface water and groundwater from contamination by surface-applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario is a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes, such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation, as accurately as possible. Since this model has been parameterized for only one soil and one weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model requiring less detailed information about the soil and weather conditions, but that still represents the effect of soil and climate on pesticide leaching using information available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision the detailed model provides for predicting pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but covers the area completely, so it selects a worst-case condition more accurately.

  14. Query Optimization in Distributed Databases.

    DTIC Science & Technology

    1982-10-01

    In general, the strategy a31 a11 a3 is more time consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires... analytic behavior of those heuristic algorithms. Although some analytic results of worst-case and average-case analysis are difficult to obtain, some...

  15. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    PubMed

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimations on typical real-time applications show that partial cache locking yields a remarkable WCET improvement over static analysis and full cache locking.

  16. Combining Instruction Prefetching with Partial Cache Locking to Improve WCET in Real-Time Systems

    PubMed Central

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems, bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve the worst-case cache performance and, in turn, the worst-case execution time. Estimations on typical real-time applications show that partial cache locking yields a remarkable WCET improvement over static analysis and full cache locking. PMID:24386133

  17. An SEU resistant 256K SOI SRAM

    NASA Astrophysics Data System (ADS)

    Hite, L. R.; Lu, H.; Houston, T. W.; Hurta, D. S.; Bailey, W. E.

    1992-12-01

    A novel SEU (single event upset) resistant SRAM (static random access memory) cell has been implemented in a 256K SOI (silicon on insulator) SRAM that has attractive performance characteristics over the military temperature range of -55 to +125 C. These include a worst-case access time of 40 ns with an active power of only 150 mW at 25 MHz, and a worst-case minimum WRITE pulse width of 20 ns. Measured SEU performance gives an Adams 10 percent worst-case error rate of 3.4 × 10^-11 errors/bit-day using the CRUP code with a conservative first-upset LET threshold. Modeling does show that higher bipolar gain than that measured on a sample from the SRAM lot would produce a lower error rate. Measurements show the worst-case supply voltage for SEU to be 5.5 V. Analysis has shown this to be primarily caused by the drain voltage dependence of the beta of the SOI parasitic bipolar transistor. Based on this, SEU experiments with SOI devices should include measurements as a function of supply voltage, rather than only at the traditional 4.5 V, to determine the worst-case condition.

  18. Robust guaranteed-cost adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.

    2017-05-01

    Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.

  19. Conceptual modeling for identification of worst case conditions in environmental risk assessment of nanomaterials using nZVI and C60 as case studies.

    PubMed

    Grieger, Khara D; Hansen, Steffen F; Sørensen, Peter B; Baun, Anders

    2011-09-01

    Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C60 in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C60 in these applications as well as demonstrate the use of worst-case conditions to optimize future research efforts for other nanomaterials. Through the application of the WCD model, we find that the most probable worst-case conditions for both case studies include i) active uptake mechanisms, ii) accumulation in organisms, iii) ecotoxicological response mechanisms such as reactive oxygen species (ROS) production and cell membrane damage or disruption, iv) surface properties of nZVI and C60, and v) acute exposure tolerance of organisms. Additional estimates of worst-case conditions for C60 also include the physical location of C60 in the environment from surface run-off, cellular exposure routes for heterotrophic organisms, and the presence of light to amplify adverse effects. Based on results of this analysis, we recommend the prioritization of research for the selected applications within the following areas: organism active uptake ability of nZVI and C60 and ecotoxicological response end-points and response mechanisms including ROS production and cell membrane damage, full nanomaterial characterization taking into account detailed information on nanomaterial surface properties, and investigations of dose-response relationships for a variety of organisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Multiple object tracking using the shortest path faster association algorithm.

    PubMed

    Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.

  21. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    PubMed Central

    Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322
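
    For concreteness, the shortest-path primitive named in the title, sketched on an ordinary weighted digraph; SPFA is the queue-based refinement of Bellman-Ford, and this sketch omits the flow-network construction and LP relaxation that the paper builds around it:

    ```python
    from collections import deque

    def spfa(graph, src):
        """Shortest Path Faster Algorithm: queue-based Bellman-Ford.
        graph: {u: [(v, weight), ...]} with every vertex present as a key.
        Fast on typical sparse graphs, O(V*E) in the worst case."""
        dist = {u: float("inf") for u in graph}
        dist[src] = 0.0
        queued = {u: False for u in graph}
        q = deque([src])
        queued[src] = True
        while q:
            u = q.popleft()
            queued[u] = False
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    if not queued[v]:
                        q.append(v)
                        queued[v] = True
        return dist

    g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
    print(spfa(g, "s"))  # {'s': 0.0, 'a': 2.0, 'b': 3.0}
    ```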

  22. Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 1

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    Electrical characterization and qualification tests were performed on the RCA MWS5001D, 1024 by 1-bit, CMOS, random access memory. Characterization tests were performed on five devices. The tests included functional tests, an AC parametric worst-case pattern selection test, determination of the worst-case transition for setup and hold times, and a series of schmoo plots. The qualification tests were performed on 32 devices and included a 2000-hour burn-in with electrical tests performed at 0 hours and after 168, 1000, and 2000 hours of burn-in. The tests performed included functional tests and AC and DC parametric tests. All of the tests in the characterization phase, with the exception of the worst-case transition test, were performed at ambient temperatures of 25, -55 and 125 C. The worst-case transition test was performed at 25 C. The pre-burn-in electrical tests were performed at 25, -55, and 125 C. All burn-in endpoint tests were performed at 25, -40, -55, 85, and 125 C.

  23. Worst case analysis: Earth sensor assembly for the tropical rainfall measuring mission observatory

    NASA Technical Reports Server (NTRS)

    Conley, Michael P.

    1993-01-01

    This worst case analysis verifies that the TRMMESA electronic design is capable of maintaining performance requirements when subjected to worst-case circuit conditions. The TRMMESA design is a proven heritage design, capable of withstanding the most adverse, worst-case circuit conditions. Changes made to the baseline DMSP design are relatively minor and do not adversely affect the worst-case analysis of the TRMMESA electrical design.

  24. Availability Simulation of AGT Systems

    DOT National Transportation Integrated Search

    1975-02-01

    The report discusses the analytical and simulation procedures that were used to evaluate the effects of failure in a complex dual-mode transportation system based on a worst-case steady-state condition. The computed results are an availability figure ...

  25. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both the decomposition and search approaches.

  26. Availability Analysis of Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-04-01

    The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...

  27. Aircraft Loss-of-Control: Analysis and Requirements for Future Safety-Critical Systems and Their Validation

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    2011-01-01

    Loss of control remains one of the largest contributors to fatal aircraft accidents worldwide. Aircraft loss-of-control accidents are complex, resulting from numerous causal and contributing factors acting alone or more often in combination. Hence, there is no single intervention strategy to prevent these accidents. This paper summarizes recent analysis results in identifying worst-case combinations of loss-of-control accident precursors and their time sequences, a holistic approach to preventing loss-of-control accidents in the future, and key requirements for validating the associated technologies.

  28. Fine-Scale Structure Design for 3D Printing

    NASA Astrophysics Data System (ADS)

    Panetta, Francis Julian

    Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.

  29. Multiple usage of the CD PLUS/UNIX system: performance in practice.

    PubMed Central

    Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A

    1995-01-01

    In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable. PMID:8547902

  30. Multiple usage of the CD PLUS/UNIX system: performance in practice.

    PubMed

    Volkers, A C; Tjiam, I A; van Laar, A; Bleeker, A

    1995-10-01

    In August 1994, the CD PLUS/Ovid literature retrieval system based on UNIX was activated for the Faculty of Medicine and Health Sciences of Erasmus University in Rotterdam, the Netherlands. There were up to 1,200 potential users. Tests were carried out to determine the extent to which searching for literature was affected by other end users of the system. In the tests, search times and download times were measured in relation to a varying number of continuously active workstations. Results indicated a linear relationship between search times and the number of active workstations. In the "worst case" situation with sixteen active workstations, the time required for record retrieval increased by a factor of sixteen and downloading time by a factor of sixteen over the "best case" of no other active stations. However, because the worst case seldom, if ever, happens in real life, these results are considered acceptable.

  31. SU-E-T-551: PTV Is the Worst-Case of CTV in Photon Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, D; Liu, W; Park, P

    2014-06-01

    Purpose: To examine the supposition of the static dose cloud and adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of clinical target volume (CTV) dose distribution for photon therapy in head and neck (H and N) plans. Methods: Five diverse H and N plans clinically delivered at our institution were selected. Isocenter for each plan was shifted positively and negatively in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (3 mm) for a total of six shifted plans per original plan. The perturbed plan dose was recalculated in Eclipse (AAA v11.0.30) using the same, fixed fluence map as the original plan. The dose distributions for all plans were exported from the treatment planning system to determine the worst-case CTV dose distributions for each nominal plan. Two worst-case distributions, cold and hot, were defined by selecting the minimum or maximum dose per voxel from all the perturbed plans. The resulting dose volume histograms (DVH) were examined to evaluate the worst-case CTV and nominal PTV dose distributions. Results: Inspection demonstrates that the CTV DVH in the nominal dose distribution is indeed bounded by the CTV DVHs in the worst-case dose distributions. Furthermore, comparison of the D95% for the worst-case (cold) CTV and nominal PTV distributions by Pearson's chi-square test shows excellent agreement for all plans. Conclusion: The assumption that the nominal dose distribution for PTV represents the worst-case dose distribution for CTV appears valid for the five plans under examination. Although the worst-case dose distributions are unphysical since the dose per voxel is chosen independently, the cold worst-case distribution serves as a lower bound for the worst-case possible CTV coverage. Minor discrepancies between the nominal PTV dose distribution and worst-case CTV dose distribution are expected since the dose cloud is not strictly static. This research was supported by the NCI through grant K25CA168984, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, and by the Fraternal Order of Eagles Cancer Research Fund, the Career Development Award Program at Mayo Clinic.
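
    The voxelwise min/max construction described above is simple to state in array form; a sketch, where including the nominal plan in the min/max and the D95% definition as the 5th percentile of in-structure voxel doses are our assumptions:

    ```python
    import numpy as np

    def worst_case_distributions(nominal, shifted):
        """Voxelwise 'cold' and 'hot' worst cases: the per-voxel minimum
        and maximum over the nominal dose array and the isocenter-shifted
        recalculations (all arrays share one shape)."""
        stack = np.stack([nominal] + list(shifted))    # (plans, z, y, x)
        return stack.min(axis=0), stack.max(axis=0)

    def d95(dose, structure_mask):
        """D95%: the dose covering 95% of a structure's voxels, i.e. the
        5th percentile of the voxel doses inside the boolean mask."""
        return np.percentile(dose[structure_mask], 5.0)
    ```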

  32. Adaptive Attitude Control of the Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Muse, Jonathan

    2010-01-01

    An H∞-NMA architecture for the Crew Launch Vehicle was developed in a state feedback setting. The minimal-complexity adaptive law was shown to improve baseline performance relative to a performance metric based on Crew Launch Vehicle design requirements for almost all of the Worst-on-Worst dispersion cases. The adaptive law was able to maintain stability for some dispersions that are unstable with the nominal control law. Due to the nature of the H∞-NMA architecture, the augmented adaptive control signal has low bandwidth, which is a great benefit for a manned launch vehicle.

  33. Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios

    NASA Astrophysics Data System (ADS)

    Kluess, D.; Mittelmeier, W.; Bader, R.

    2009-12-01

    In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst-case scenarios including malposition, bone defects and stumbling. A finite element model was developed to calculate the implant performance in situ. The worst-case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stresses exceeded the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.

  34. Biomechanical behavior of a cemented ceramic knee replacement under worst case scenarios

    NASA Astrophysics Data System (ADS)

    Kluess, D.; Mittelmeier, W.; Bader, R.

    2010-03-01

    In connection with technological advances in the manufacturing of medical ceramics, a newly developed ceramic femoral component was introduced in total knee arthroplasty (TKA). The motivation to consider ceramics in TKA is based on the allergological and tribological benefits proven in total hip arthroplasty. Owing to the brittleness and reduced fracture toughness of ceramic materials, the biomechanical performance has to be examined intensively. Apart from standard testing, we calculated the implant performance under different worst-case scenarios including malposition, bone defects and stumbling. A finite element model was developed to calculate the implant performance in situ. The worst-case conditions revealed principal stresses 12.6 times higher during stumbling than during normal gait. Nevertheless, none of the calculated principal stresses exceeded the critical strength of the ceramic material used. The analysis of malposition showed the necessity of exact alignment of the implant components.

  35. Migration of mineral oil from party plates of recycled paperboard into foods: 1. Is recycled paperboard fit for the purpose? 2. Adequate testing procedure.

    PubMed

    Dima, Giovanna; Verzera, Antonella; Grob, Koni

    2011-11-01

    Party plates made of recycled paperboard with a polyolefin film on the food contact surface (more often polypropylene than polyethylene) were tested for migration of mineral oil into various foods applying reasonable worst case conditions. The worst case was identified as a slice of fried meat placed onto the plate while hot and allowed to cool for 1 h. As it caused the acceptable daily intake (ADI) specified by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) to be exceeded, it is concluded that recycled paperboard is generally acceptable for party plates only when separated from the food by a functional barrier. Migration data obtained with oil as simulant at 70°C was compared to the migration into foods. A contact time of 30 min was found to reasonably cover the worst case determined in food.

  36. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    NASA Astrophysics Data System (ADS)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual routers enable the coexistence of different networks on the same physical facility and have lately attracted a great deal of attention from researchers. As the number of IPv6 addresses in virtual routers is rapidly increasing, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called the weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of virtual routers into one spanning tree and compresses the space cost. WBT's average-case and worst-case time complexities for lookup and update are both O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison to separation schemes. WBT also achieves the smallest average search depth compared with other homogeneous algorithms.
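
    The WBT structure itself is not reproduced here; as a baseline for what any IPv6 longest-prefix-match scheme must compute, here is a deliberately simple per-prefix-length hash-table lookup (hypothetical class name; up to 128 hashed probes per lookup versus the O(log N) tree search described above):

    ```python
    import ipaddress

    class PerLengthFib:
        """Baseline IPv6 longest-prefix match: one hash table per prefix
        length, probed from longest to shortest. A reference point only,
        not the paper's weight-balanced tree."""
        def __init__(self):
            self.tables = {}              # prefix_len -> {net_int: next_hop}

        def insert(self, cidr, next_hop):
            net = ipaddress.ip_network(cidr)
            table = self.tables.setdefault(net.prefixlen, {})
            table[int(net.network_address)] = next_hop

        def lookup(self, addr):
            a = int(ipaddress.ip_address(addr))
            for plen in sorted(self.tables, reverse=True):  # longest first
                mask = ((1 << plen) - 1) << (128 - plen)
                hop = self.tables[plen].get(a & mask)
                if hop is not None:
                    return hop
            return None

    fib = PerLengthFib()
    fib.insert("2001:db8::/32", "if0")
    fib.insert("2001:db8:aaaa::/48", "if1")
    print(fib.lookup("2001:db8:aaaa::1"))  # if1 (longer prefix wins)
    ```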

  37. A space-efficient quantum computer simulator suitable for high-speed FPGA implementation

    NASA Astrophysics Data System (ADS)

    Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel

    2009-05-01

    Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
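
    The space-time tradeoff in question can be shown in a few lines: computing a single amplitude by a Feynman-style path sum needs memory proportional to circuit depth instead of the 2^n state vector, at exponential cost in the number of branching gates. A toy sketch with one- and two-qubit gates (our own encoding, not the paper's design):

    ```python
    def amplitude(gates, x, y):
        """<y|U|x> via a path sum. gates: list of entries of the form
        ('1q', qubit, ((m00, m01), (m10, m11))) or ('cx', control, target);
        x, y are computational basis states as integers."""
        def amp(t, state):      # amplitude of being in `state` after t gates
            if t == 0:
                return 1.0 if state == x else 0.0
            kind, a, b = gates[t - 1]
            if kind == "cx":    # permutation gate: exactly one predecessor
                prev = state ^ (1 << b) if (state >> a) & 1 else state
                return amp(t - 1, prev)
            bit = (state >> a) & 1   # '1q' gate on qubit a: two predecessors
            return (b[bit][0] * amp(t - 1, state & ~(1 << a)) +
                    b[bit][1] * amp(t - 1, state | (1 << a)))
        return amp(len(gates), y)

    # Example: Hadamard on qubit 0 of |0> gives amplitude 1/sqrt(2) on |0>.
    h = ((0.5 ** 0.5, 0.5 ** 0.5), (0.5 ** 0.5, -(0.5 ** 0.5)))
    print(amplitude([("1q", 0, h)], 0, 0))  # ~0.7071
    ```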

  38. A Worst-Case Approach for On-Line Flutter Prediction

    NASA Technical Reports Server (NTRS)

    Lind, Rick C.; Brenner, Martin J.

    1998-01-01

    Worst-case flutter margins may be computed for a linear model with respect to a set of uncertainty operators using the structured singular value. This paper considers an on-line implementation to compute these robust margins in a flight test program. Uncertainty descriptions are updated at test points to account for unmodeled time-varying dynamics of the airplane by ensuring the robust model is not invalidated by measured flight data. Robust margins computed with respect to this uncertainty remain conservative to the changing dynamics throughout the flight. A simulation clearly demonstrates this method can improve the efficiency of flight testing by accurately predicting the flutter margin to improve safety while reducing the necessary flight time.

  39. Asteroid Bennu Temperature Maps for OSIRIS-REx Spacecraft and Instrument Thermal Analyses

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.; Emery, Josh; Delbo, Marco

    2014-01-01

    A thermophysical model has been developed to generate asteroid Bennu surface temperature maps for OSIRIS-REx spacecraft and instrument thermal design and analyses at the Critical Design Review (CDR). Two-dimensional temperature maps for worst hot and worst cold cases are used in Thermal Desktop to assure adequate thermal design margins. To minimize the complexity of the Bennu geometry in Thermal Desktop, it is modeled as a sphere instead of the radar shape. The post-CDR updated thermal inertia and a modified approach show that the new surface temperature predictions are more benign. Therefore the CDR Bennu surface temperature predictions are conservative.

  40. Specifying design conservatism: Worst case versus probabilistic analysis

    NASA Technical Reports Server (NTRS)

    Miles, Ralph F., Jr.

    1993-01-01

    Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.

  41. "Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis

    NASA Astrophysics Data System (ADS)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.

  42. 30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 2 2012-07-01 2012-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...

  43. 30 CFR 253.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000 bbls but not more than... must demonstrate OSFR in accordance with the following table: COF worst case oil-spill discharge volume... applicable table in paragraph (b)(1) or (b)(2) for a facility with a potential worst case oil-spill discharge...

  44. 30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 2 2013-07-01 2013-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...

  45. 30 CFR 553.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 2 2014-07-01 2014-07-01 false How do I determine the worst case oil-spill... THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate...

  46. 30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 2 2011-07-01 2011-07-01 false How do I determine the worst case oil-spill... ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To...

  47. 30 CFR 253.14 - How do I determine the worst case oil-spill discharge volume?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How do I determine the worst case oil-spill... INTERIOR OFFSHORE OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 253.14 How do I determine the worst case oil-spill discharge volume? (a) To calculate the amount...

  48. Lower bound for LCD image quality

    NASA Astrophysics Data System (ADS)

    Olson, William P.; Balram, Nikhil

    1996-03-01

    The paper presents an objective lower bound for the discrimination of patterns and fine detail in images on a monochrome LCD. In applications such as medical imaging and military avionics the information of interest is often at the highest frequencies in the image. Since LCDs are sampled data systems, their output modulation is dependent on the phase between the input signal and the sampling points. This phase dependence becomes particularly significant at high spatial frequencies. In order to use an LCD for applications such as those mentioned above it is essential to have a lower (worst case) bound on the performance of the display. We address this problem by providing a mathematical model for the worst case output modulation of an LCD in response to a sine wave input. This function can be interpreted as a worst case modulation transfer function (MTF). The intersection of the worst case MTF with the contrast threshold function (CTF) of the human visual system defines the highest spatial frequency that will always be detectable. In addition to providing the worst case limiting resolution, this MTF is combined with the CTF to produce objective worst case image quality values using the modulation transfer function area (MTFA) metric.
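
    A toy numerical version of the worst-case MTF idea, assuming ideal sample-and-hold pixels and Michelson modulation; the point is only that output modulation depends on the signal-to-grid phase and collapses at Nyquist for the worst phase:

    ```python
    import numpy as np

    def worst_case_modulation(freq, pixels=256, phases=360):
        """Sample a unit-contrast sine at `freq` cycles/pixel on a pixel
        grid and return the Michelson modulation (max-min)/(max+min)
        minimized over the phase between the signal and the grid."""
        x = np.arange(pixels)                       # pixel sample points
        worst = 1.0
        for phi in np.linspace(0.0, 2 * np.pi, phases, endpoint=False):
            s = 0.5 + 0.5 * np.sin(2 * np.pi * freq * x + phi)
            worst = min(worst, (s.max() - s.min()) / (s.max() + s.min()))
        return worst

    print(worst_case_modulation(0.25))  # healthy modulation at Nyquist/2
    print(worst_case_modulation(0.5))   # ~0 at Nyquist: worst phase kills it
    ```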

  49. Probabilistic Solar Energetic Particle Models

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.

    2011-01-01

    To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceeds, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provide the worst-case differential proton spectrum. This model is based on data from IMP-8 and GOES spacecraft that provide a data base extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
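
    A Monte Carlo sketch of what "worst-case environment at a user-specified confidence level" means operationally; the resampling scheme and the synthetic lognormal event fluences below are illustrative assumptions, not the model's actual fitted distributions:

    ```python
    import numpy as np

    def worst_case_total_fluence(event_fluences, n_events, confidence=0.9,
                                 trials=100_000, seed=0):
        """Resample historical event fluences for a mission that will see
        `n_events` events and report the `confidence` quantile of the
        mission-integrated totals as the worst-case environment."""
        rng = np.random.default_rng(seed)
        draws = rng.choice(event_fluences, size=(trials, n_events))
        totals = draws.sum(axis=1)
        return np.quantile(totals, confidence)

    # Synthetic stand-in for an observed event-fluence record.
    events = np.random.default_rng(1).lognormal(mean=20.0, sigma=1.5, size=300)
    print(worst_case_total_fluence(events, n_events=12, confidence=0.95))
    ```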

  50. A Fully Coupled Multi-Rigid-Body Fuel Slosh Dynamics Model Applied to the Triana Stack

    NASA Technical Reports Server (NTRS)

    London, K. W.

    2001-01-01

    A somewhat general multibody model is presented that accounts for energy dissipation associated with fuel slosh and which unifies some of the existing more specialized representations. This model is used to predict the nutation growth time constant for the Triana spacecraft, or Stack, consisting of the Triana Observatory mated with the Gyroscopic Upper Stage, or GUS (which includes the solid rocket motor, SRM, booster). At the nominal spin rate of 60 rpm and with 145 kg of hydrazine propellant on board, a time constant of 116 s is predicted for worst-case sloshing of a spherical slug model, compared to 1,681 s (nominal) and 1,043 s (worst case) for sloshing of a three-degree-of-freedom pendulum model.

  11. Assessing the robustness of passive scattering proton therapy with regard to local recurrence in stage III non-small cell lung cancer: a secondary analysis of a phase II trial.

    PubMed

    Zhu, Zhengfei; Liu, Wei; Gillin, Michael; Gomez, Daniel R; Komaki, Ritsuko; Cox, James D; Mohan, Radhe; Chang, Joe Y

    2014-05-06

    We assessed the robustness of passive scattering proton therapy (PSPT) plans for patients in a phase II trial of PSPT for stage III non-small cell lung cancer (NSCLC) by using the worst-case scenario method, and compared the worst-case dose distributions with the appearance of locally recurrent lesions. Worst-case dose distributions were generated for each of 9 patients who experienced recurrence after concurrent chemotherapy and PSPT to 74 Gy(RBE) for stage III NSCLC by simulating and incorporating uncertainties associated with set-up, respiration-induced organ motion, and proton range in the planning process. The worst-case CT scans were then fused with the positron emission tomography (PET) scans to locate the recurrence. Although the volumes enclosed by the prescription isodose lines in the worst-case dose distributions were consistently smaller than enclosed volumes in the nominal plans, the target dose coverage was not significantly affected: only one patient had a recurrence outside the prescription isodose lines in the worst-case plan. PSPT is a relatively robust technique. Local recurrence was not associated with target underdosage resulting from estimated uncertainties in 8 of 9 cases.

  12. Sensitivity of worst-case storm surge considering the influence of climate change

    NASA Astrophysics Data System (ADS)

    Takayabu, Izuru; Hibino, Kenshi; Sasaki, Hidetaka; Shiogama, Hideo; Mori, Nobuhito; Shibutani, Yoko; Takemi, Tetsuya

    2016-04-01

    There are two standpoints when assessing risk caused by climate change. The first is disaster prevention: for this purpose, we need probabilistic information on meteorological elements, obtained from a sufficiently large number of ensemble simulations. The second is disaster mitigation: for this purpose, we have to use a very high resolution, sophisticated model to represent a worst-case event in detail. If we could use enough computing resources to drive many ensemble runs with a very high resolution model, we could address both themes at once. However, resources are limited in most cases, and we have to choose between resolution and the number of simulations when designing the experiment. Applying the PGWD (Pseudo Global Warming Downscaling) method is one solution for analyzing a worst-case event in detail. Here we introduce an example that estimates the influence of climate change on the worst-case storm surge by applying PGWD to super typhoon Haiyan (Takayabu et al., 2015). A 1 km grid WRF model could represent both the intensity and the structure of a super typhoon. The PGWD method can only estimate the influence of climate change on the development process of the typhoon; changes in typhoon genesis cannot be estimated. Finally, we ran the SU-WAT model (which includes a shallow-water-equation model) to obtain the storm surge height signal. The result indicates that the height of the storm surge increased by up to 20% owing to 150 years of climate change.

  13. Algorithm Diversity for Resilient Systems

    DTIC Science & Technology

    2016-06-27

    A systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity ... worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different data structures. (Subject terms: computer security, software diversity, program transformation.)

  14. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of simulating the aircraft parts riveting process necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
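    For intuition, here is a toy log-barrier interior-point iteration for a nonnegativity-constrained convex quadratic program, the kind of problem contact formulations reduce to. This is a generic sketch; the paper's primal-dual variant, its preconditioner, and its physics-based initial guesses are not reproduced.

```python
# Toy log-barrier interior-point method for min 0.5*x'Qx + c'x, x >= 0.
# Newton steps on the barrier subproblem, with the barrier weight mu
# shrunk geometrically as in path-following methods.
import numpy as np

def barrier_qp(Q, c, x0, mu=1.0, shrink=0.2, tol=1e-8, newton_iters=50):
    x = x0.copy()
    while mu > tol:
        for _ in range(newton_iters):
            grad = Q @ x + c - mu / x
            hess = Q + mu * np.diag(1.0 / x**2)
            step = np.linalg.solve(hess, -grad)
            # Backtrack so the iterate stays strictly interior (x > 0).
            t = 1.0
            while np.any(x + t * step <= 0):
                t *= 0.5
            x = x + t * step
            if np.linalg.norm(grad) < 1e-10:
                break
        mu *= shrink  # tighten the barrier
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(barrier_qp(Q, c, x0=np.ones(2)))
```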

  15. Lifetime Prevalence of Posttraumatic Stress Disorder in Two American Indian Reservation Populations

    PubMed Central

    Beals, Janette; Manson, Spero M.; Croy, Calvin; Klein, Suzell A.; Whitesell, Nancy Rumbaugh; Mitchell, Christina M.

    2015-01-01

    Posttraumatic stress disorder (PTSD) has been found to be more common among American Indian populations than among other Americans. PTSD is a complex diagnosis, and assessment methods have varied across epidemiological studies, especially in terms of the trauma criteria. Here, we examined data from the American Indian Service Utilization, Psychiatric Epidemiology, Risk and Protective Factors Project (AI-SUPERPFP) to estimate the lifetime prevalence of PTSD in two culturally distinct American Indian reservation communities, using two formulas for calculating PTSD prevalence. The AI-SUPERPFP was a cross-sectional probability sample survey conducted between 1997 and 2000. Southwest (n = 1,446) and Northern Plains (n = 1,638) tribal members living on or near their reservations, aged 15–57 years at time of interview, were randomly sampled from tribal rolls. PTSD estimates were derived based on both the single worst and the 3 worst traumas. Prevalence estimates varied by ascertainment method: single worst trauma (lifetime: 5.9% to 14.8%) versus 3 worst traumas (lifetime: 8.9% to 19.5%). Use of the 3-worst-event approach increased prevalence estimates by 28.3% over the single-event method. PTSD was prevalent in these tribal communities. These results also underscore the need to better understand the implications for PTSD prevalence of the current focus on a single worst event. PMID:23900893

  16. Acquisition Management for System-of-Systems: Requirement Evolution and Acquisition Strategy Planning

    DTIC Science & Technology

    2012-04-30

    DoD SERC, Aeronautics & Astronautics, 5/16/2012, NPS 9th Annual Acquisition Research Symposium. [Figure: probability to complete a mission vs. time (mins) for architecture 1 and architecture 2; worst-case comparison vs. % of system failures in each architecture.]

  17. Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots.

    PubMed

    Busch, Martin H J; Vollmann, Wolfgang; Grönemeyer, Dietrich H W

    2006-05-26

    Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow diagnostic information about the implant lumen to be obtained (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach 1/4 of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor, and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. First, an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions are considered in a numerical simulation, which may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into each wire part at the location of a defect. The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is accounted for in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. The analytical solution as worst-case scenario, as well as the finite volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases.
Wire radius, wire material, and the physiological parameters blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, and not close to, the worst case. For both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as the absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q V(ind) < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore have no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q V(ind) > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for.

  18. Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots

    PubMed Central

    Busch, Martin HJ; Vollmann, Wolfgang; Grönemeyer, Dietrich HW

    2006-01-01

    Background Active magnetic resonance imaging implants, for example stents, stent grafts or vena cava filters, are constructed as wireless inductively coupled transmit and receive coils. They are built as a resonator tuned to the Larmor frequency of a magnetic resonance system. The resonator can be added to or incorporated within the implant. This technology can counteract the shielding caused by eddy currents inside the metallic implant structure and may allow diagnostic information about the implant lumen to be obtained (in-stent stenosis or thrombosis, for example). The electromagnetic rf-pulses during magnetic resonance imaging induce a current in the circuit path of the resonator. A partial rupture of the circuit path provoked by material fatigue, or a broken wire with touching surfaces, can set up a relatively high resistance over a very short distance, which may behave as a point-like power source, a hot spot, inside the body part in which the resonator is implanted. This local power loss inside a small volume can reach ¼ of the total power loss of the intact resonating circuit, which itself is proportional to the product of the resonator volume and the quality factor, and depends as well on the orientation of the resonator with respect to the main magnetic field and on the imaging sequence the resonator is exposed to. Methods First, an analytical solution of a hot spot for thermal equilibrium is described. This analytical solution with a definite hot spot power loss represents the worst-case scenario for thermal equilibrium inside a homogeneous medium without cooling effects. Starting from these worst-case assumptions, additional, more realistic conditions are considered in a numerical simulation, which may make the results less critical. The analytical solution as well as the numerical simulations use the experimental experience of the maximum hot spot power loss of implanted resonators with a definite volume during magnetic resonance imaging investigations. The finite volume analysis calculates the time-developing temperature maps for the model of a broken linear metallic wire embedded in tissue. Half of the total hot spot power loss is assumed to diffuse into each wire part at the location of a defect. The energy is distributed from there by heat conduction. Additionally, the effect of blood perfusion and blood flow is accounted for in some simulations, because the simultaneous appearance of all worst-case conditions, especially the absence of blood perfusion and blood flow near the hot spot, is very unlikely for vessel implants. Results The analytical solution as worst-case scenario, as well as the finite volume analysis for near-worst-case situations, shows non-negligible volumes with critical temperature increases for part of the modeled hot spot situations. MR investigations with a high rf-pulse density lasting less than a minute can establish volumes of several cubic millimeters with temperature increases high enough to start cell destruction. Longer exposure times can involve volumes larger than 100 mm3. Even temperature increases in the range of thermal ablation are reached for substantial volumes. MR sequence exposure time and hot spot power loss are the primary factors influencing the volume with critical temperature increases.
Wire radius, wire material, and the physiological parameters blood perfusion and blood flow inside larger vessels reduce the volume with critical temperature increases, but do not exclude a volume with critical tissue heating for resonators with a large product of resonator volume and quality factor. Conclusion The worst-case scenario assumes thermal equilibrium for a hot spot embedded in homogeneous tissue without any cooling due to blood perfusion or flow. The finite volume analysis can calculate the results for conditions near to, and not close to, the worst case. For both cases a substantial volume can reach a critical temperature increase in a short time. The analytical solution, as the absolute worst case, points out that resonators with a small product of inductance volume and quality factor (Q Vind < 2 cm3) are definitely safe. Stents for coronary vessels or resonators used as tracking devices for interventional procedures therefore have no risk of high temperature increases. The finite volume analysis shows that even conditions not close to the worst case reach physiologically critical temperature increases for implants with a large product of inductance volume and quality factor (Q Vind > 10 cm3). Such resonators exclude patients from exactly the MRI investigation these devices are made for. PMID:16729878

  19. Fault detection and initial state verification by linear programming for a class of Petri nets

    NASA Technical Reports Server (NTRS)

    Rachell, Traxon; Meyer, David G.

    1992-01-01

    The authors present an algorithmic approach to determining when the marking of a LSMG (live safe marked graph) or a LSFC (live safe free choice) net is in the set of live safe markings M. Hence, once the marking of a net has been determined to be in M, a later determination that the marking is not in M indicates a fault. It is shown how linear programming can be used to determine whether m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs that must be solved.
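    The flavor of such an LP membership test can be sketched as a feasibility check: a marking m is consistent with a reference marking m0 of a marked graph when m = m0 + C x for some nonnegative firing-count vector x, with C the incidence matrix. The tiny net below is a made-up placeholder, not the paper's exact characterization of M.

```python
# Schematic LP feasibility test: decide whether a marking m can be
# written as m0 + C @ x for some firing-count vector x >= 0, with C the
# incidence matrix of the net.
import numpy as np
from scipy.optimize import linprog

C = np.array([[-1,  0,  1],   # incidence matrix: places x transitions
              [ 1, -1,  0],
              [ 0,  1, -1]])
m0 = np.array([1, 0, 0])      # a reference live safe marking
m  = np.array([0, 1, 0])      # marking to test

# Feasibility LP: find x >= 0 with C @ x == m - m0 (zero objective).
res = linprog(c=np.zeros(C.shape[1]), A_eq=C, b_eq=m - m0,
              bounds=[(0, None)] * C.shape[1], method="highs")
print("marking consistent" if res.success else "fault indicated")
```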

  20. Management adaptation to fires in the wildland-urban risk areas in Spain

    Treesearch

    Gema Herrero-Corral

    2013-01-01

    Forest fires not only cause damage to ecosystems but also result in major socio-economic losses and in the worst cases loss of human life. Specifically, the incidence of fires in the overlapping areas between building structures and forest vegetation (wildland-urban interface, WUI) generates highly-complex emergencies due to the presence of people and goods....

  1. The Best of Times and the Worst of Times: Research Managed as a Performance Economy--The Australian Case. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Marginson, Simon

    This study examined the character of the emerging systems of corporate management in Australian universities and their effects on academic and administrative practices, focusing on relations of power. Case studies were conducted at 17 individual universities of various types. In each institution, interviews were conducted with senior…

  2. Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins indicate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  3. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures.

    PubMed

    Scheid, Anika; Nebel, Markus E

    2012-07-09

    Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case - without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.
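    The perturbation experiment can be imitated in a few lines: disturb a probability vector with absolute or relative errors, renormalize, and measure the distance to the exact distribution. The probabilities below are arbitrary stand-ins, not SCFG-derived sampling probabilities.

```python
# Disturb a vector of sampling probabilities with absolute versus
# relative errors, renormalize, and compare with the exact distribution.
import numpy as np

rng = np.random.default_rng(1)
p_exact = np.array([0.55, 0.25, 0.12, 0.08])  # exact sampling probabilities

def disturb(p, eps, mode):
    if mode == "absolute":
        q = p + rng.uniform(-eps, eps, size=p.shape)
    else:  # relative error scales with each probability
        q = p * (1.0 + rng.uniform(-eps, eps, size=p.shape))
    q = np.clip(q, 1e-12, None)
    return q / q.sum()            # renormalize to a valid distribution

for mode in ("absolute", "relative"):
    q = disturb(p_exact, eps=0.05, mode=mode)
    tv = 0.5 * np.abs(p_exact - q).sum()   # total variation distance
    print(f"{mode:8s} error: perturbed {np.round(q, 3)}, TV distance {tv:.3f}")
```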

  4. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    PubMed Central

    2012-01-01

    Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case – without sacrificing much of the accuracy of the results. Conclusions Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms. PMID:22776037

  5. A New Efficient Algorithm for the All Sorting Reversals Problem with No Bad Components.

    PubMed

    Wang, Biing-Feng

    2016-01-01

    The problem of finding all reversals that take a permutation one step closer to a target permutation is called the all sorting reversals problem (the ASR problem). For this problem, Siepel had an O(n^3)-time algorithm. Most complications of his algorithm stem from some peculiar structures called bad components. Since bad components are very rare in both real and simulated data, it is practical to study the ASR problem with no bad components. For the ASR problem with no bad components, Swenson et al. gave an O(n^2)-time algorithm. Very recently, Swenson found that their algorithm does not always work. In this paper, a new algorithm is presented for the ASR problem with no bad components. The time complexity is O(n^2) in the worst case and is linear in the size of input and output in practice.

  6. Cultivating Engineering Ethics and Critical Thinking: A Systematic and Cross-Cultural Education Approach Using Problem-Based Learning

    ERIC Educational Resources Information Center

    Chang, Pei-Fen; Wang, Dau-Chung

    2011-01-01

    In May 2008, the worst earthquake in more than three decades struck southwest China, killing more than 80,000 people. The complexity of this earthquake makes it an ideal case study to clarify the intertwined issues of ethics in engineering and to help cultivate critical thinking skills. This paper first explores the need to encourage engineering…

  7. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI.

    PubMed

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-21

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case, including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
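    The "common approach" reviewed here, maximizing a ratio of two Hermitian forms over excitation vectors, reduces to a generalized eigenvalue problem, as in the sketch below. The matrices are random Hermitian positive-definite stand-ins rather than coil or implant models, and the semidefinite-programming refinement the paper derives is not shown.

```python
# Worst-case ratio of two Hermitian forms w'A w / w'B w over RF
# excitation vectors w (local SAR at the implant vs. a single global
# constraint) equals the largest generalized eigenvalue of (A, B).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 8                                   # number of RF transmit channels
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ X.conj().T                      # stand-in local SAR matrix at the implant
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = Y @ Y.conj().T + n * np.eye(n)      # stand-in global SAR / power constraint

vals, vecs = eigh(A, B)                 # generalized eigenproblem A w = lam B w
w_worst = vecs[:, -1]                   # excitation achieving the maximum ratio
print("worst-case SAR ratio:", vals[-1])
```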

  8. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-01

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated by the inclusion of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while the MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimating the worst-case RF-induced heating in a multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and multiple SAR or power constraints must be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of the original non-convex problem, whose solution closely approximates the true worst case, including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.

  9. Quantum communication complexity of establishing a shared reference frame.

    PubMed

    Rudolph, Terry; Grover, Lov

    2003-11-21

    We discuss the aligning of spatial reference frames from a quantum communication complexity perspective. This enables us to analyze multiple rounds of communication and give several simple examples demonstrating tradeoffs between the number of rounds and the type of communication. Using a distributed variant of a quantum computational algorithm, we give an explicit protocol for aligning spatial axes via the exchange of spin-1/2 particles which makes no use of either exchanged entangled states, or of joint measurements. This protocol achieves a worst-case fidelity for the problem of "direction finding" that is asymptotically equivalent to the optimal average case fidelity achievable via a single forward communication of entangled states.

  10. 49 CFR 194.5 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...

  11. 49 CFR 194.5 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...

  12. 49 CFR 194.5 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... crosses a major river or other navigable waters, which, because of the velocity of the river flow and vessel traffic on the river, would require a more rapid response in case of a worst case discharge or..., because of its velocity and vessel traffic, would require a more rapid response in case of a worst case...

  13. Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization

    NASA Technical Reports Server (NTRS)

    Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.

    2014-01-01

    Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model has changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) at which to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
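    The Expected Improvement criterion at the core of EGO has a standard closed form for a Gaussian surrogate, sketched below. The SAGE III thermal surrogate and the multi-start modification are not reproduced, and the candidate values are invented.

```python
# Standard Expected Improvement (EI) acquisition: given a surrogate's
# predictive mean mu(x) and standard deviation sigma(x), EI scores how
# much a new run is expected to beat the current best observation.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, maximize=True):
    """EI for maximization (e.g. hottest-case orbit) or minimization."""
    improve = (mu - best) if maximize else (best - mu)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = improve / sigma
        ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)

mu = np.array([310.0, 315.0, 309.0])      # surrogate mean (K) at candidates
sigma = np.array([2.0, 6.0, 0.5])         # surrogate uncertainty
print(expected_improvement(mu, sigma, best=314.0))
```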

  14. The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.

    PubMed

    Qu, Shaojian; Ji, Ying

    2016-01-01

    In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model in which each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights for its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently for real-world applications.
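    For a fixed action, the inner worst case over weights is a linear program over the weight polytope, as in this sketch; the two-objective weight set is a made-up example, not the supply chain model from the paper.

```python
# Inner "worst case over weights" step: for a fixed action with
# objective values f, max_{w in W} w.f over a polytope weight set W
# is a linear program.
import numpy as np
from scipy.optimize import linprog

f = np.array([3.0, 7.0])          # player's two objective values (costs)

# Weight polytope W: w1 + w2 = 1, 0.2 <= w_i <= 0.8.
res = linprog(c=-f,               # maximize f.w  ==  minimize -f.w
              A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.2, 0.8)] * 2, method="highs")
worst_weighted_sum = -res.fun
print("worst-case weighted objective:", worst_weighted_sum)  # 0.2*3 + 0.8*7
```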

  15. Efficacy and cost-efficacy of biologic therapies for moderate to severe psoriasis: a meta-analysis and cost-efficacy analysis using the intention-to-treat principle.

    PubMed

    Chi, Ching-Chi; Wang, Shu-Hui

    2014-01-01

    Compared to conventional therapies, biologics are more effective but more expensive in treating psoriasis. To evaluate the efficacy and cost-efficacy of biologic therapies for psoriasis, we conducted a meta-analysis to calculate the efficacy of etanercept, adalimumab, infliximab, and ustekinumab for at least 75% reduction in the Psoriasis Area and Severity Index score (PASI 75) and Physician's Global Assessment clear/minimal (PGA 0/1). The cost-efficacy was assessed by calculating the incremental cost-effectiveness ratio (ICER) per subject achieving PASI 75 and PGA 0/1. The incremental efficacy regarding PASI 75 was 55% (95% confidence interval (95% CI) 38%-72%), 63% (95% CI 59%-67%), 71% (95% CI 67%-76%), 67% (95% CI 62%-73%), and 72% (95% CI 68%-75%) for etanercept, adalimumab, infliximab, and ustekinumab 45 mg and 90 mg, respectively. The corresponding 6-month ICER regarding PASI 75 was $32,643 (best case $24,936; worst case $47,246), $21,315 (best case $20,043; worst case $22,760), $27,782 (best case $25,954; worst case $29,440), $25,055 (best case $22,996; worst case $27,075), and $46,630 (best case $44,765; worst case $49,373), respectively. The results regarding PGA 0/1 were similar. Infliximab and ustekinumab 90 mg had the highest efficacy, while adalimumab had the best cost-efficacy, followed by ustekinumab 45 mg and infliximab.
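    The cost-efficacy arithmetic is a simple ratio. Assuming the ICER here is computed as cost per responder (cost divided by the probability of achieving PASI 75), the abstract's point estimates can be combined to back out the implied 6-month drug costs, as a consistency check rather than data from the study itself.

```python
# ICER per PASI 75 responder = 6-month cost / P(PASI 75), so the implied
# cost is ICER * efficacy. Values are the point estimates quoted above.
efficacy = {"etanercept": 0.55, "adalimumab": 0.63, "infliximab": 0.71,
            "ustekinumab 45 mg": 0.67, "ustekinumab 90 mg": 0.72}
icer_pasi75 = {"etanercept": 32643, "adalimumab": 21315, "infliximab": 27782,
               "ustekinumab 45 mg": 25055, "ustekinumab 90 mg": 46630}

for drug in efficacy:
    implied_cost = icer_pasi75[drug] * efficacy[drug]
    print(f"{drug:18s} implied 6-month cost ~ ${implied_cost:,.0f}")
```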

  16. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  17. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  18. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  19. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  20. 33 CFR 154.1029 - Worst case discharge.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...

  1. 30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... associated with the facility. In determining the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your...) For exploratory or development drilling operations, the size of your worst case discharge scenario is...

  2. A new order-theoretic characterisation of the polytime computable functions☆

    PubMed Central

    Avanzini, Martin; Eguchi, Naohi; Moser, Georg

    2015-01-01

    We propose a new order-theoretic characterisation of the class of polytime computable functions. To this end we define the small polynomial path order (sPOP⁎ for short). This termination order entails a new syntactic method to analyse the innermost runtime complexity of term rewrite systems fully automatically: for any rewrite system compatible with sPOP⁎ that employs recursion up to depth d, the (innermost) runtime complexity is bounded by a polynomial of degree d. This bound is tight. Thus we obtain a direct correspondence between a syntactic (and easily verifiable) condition on a program and the asymptotic worst-case complexity of the program. PMID:26412933

  3. SU-F-T-192: Study of Robustness Analysis Method of Multiple Field Optimized IMPT Plans for Head & Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Wang, X; Li, H

    Purpose: Proton therapy is more sensitive to uncertainties than photon treatments because protons have a finite range that depends on tissue density. The worst-case scenario (WCS) method, originally proposed by Lomax, has been adopted in our institute for robustness analysis of IMPT plans. This work demonstrates that the WCS method sufficiently accounts for the uncertainties that could be encountered during daily clinical treatment. Methods: A fast, approximate dose calculation method was developed to calculate the dose for an IMPT plan under different setup and range uncertainties. The effects of two factors, the inverse-square factor and range uncertainty, were explored. The WCS robustness analysis method was evaluated using this fast dose calculation method. The worst-case dose distribution was generated by shifting the isocenter by 3 mm along the x, y, and z directions and modifying stopping power ratios by ±3.5%. 1000 randomly perturbed cases in proton range and in the x, y, and z directions were created, and the corresponding dose distributions were calculated using the approximate method. DVHs and dosimetric indexes of all 1000 perturbed cases were calculated and compared with the worst-case scenario results. Results: The distributions of dosimetric indexes of the 1000 perturbed cases were generated and compared with the worst-case scenario results. For D95 of the CTVs, at least 97% of the 1000 perturbed cases showed higher values than the worst-case scenario. For D5 of the CTVs, at least 98% of the perturbed cases had lower values than the worst-case scenario. Conclusion: By extensively calculating the dose distributions under random uncertainties, the WCS method was verified to be reliable for evaluating the robustness of MFO IMPT plans for H&N patients. The extensive-sampling approach using the fast approximate method could be used to evaluate the effects of different factors on the robustness of IMPT plans in the future.
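    The worst-case dose construction can be sketched as a voxelwise reduction over perturbed scenarios. The dose arrays below are random stand-ins for the recomputed distributions, and the fast dose engine itself is not reproduced.

```python
# Assemble a worst-case dose distribution: shift the isocenter by
# +/-3 mm along each axis and scale stopping powers by +/-3.5%,
# recompute dose for each scenario, then take the voxelwise minimum
# (for target coverage) over all scenarios.
import numpy as np

rng = np.random.default_rng(3)
nominal = rng.uniform(60.0, 66.0, size=(40, 40, 40))   # nominal dose grid (Gy)

scenarios = []
for shift in ("x+3mm", "x-3mm", "y+3mm", "y-3mm", "z+3mm", "z-3mm",
              "range+3.5%", "range-3.5%"):
    # Stand-in for a full dose recomputation under this perturbation.
    perturbed = nominal * rng.uniform(0.97, 1.03, size=nominal.shape)
    scenarios.append(perturbed)

worst_case = np.min(np.stack(scenarios), axis=0)   # voxelwise worst coverage
d95 = np.percentile(worst_case, 5)                 # dose received by 95% of voxels
print(f"worst-case D95 over the grid: {d95:.1f} Gy")
```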

  4. Characteristics of worst hour rainfall rate for radio wave propagation modelling in Nigeria

    NASA Astrophysics Data System (ADS)

    Osita, Ibe; Nymphas, E. F.

    2017-10-01

    Radio waves, especially in the millimeter-wave band, are known to be attenuated by rain. Radio engineers and designers need to be able to predict the time of day when a radio signal will be attenuated, so as to provide measures to mitigate this effect. This is achieved by characterizing the rainfall intensity for a particular region of interest into the worst month and the worst hour of the day. This paper characterizes rainfall in Nigeria into worst year, worst month, and worst hour. It is shown that, for the period of study, 2008 and 2009 were the worst years, while September was the most frequent worst month in most of the stations. The evening hours (local time) were the worst hours of the day in virtually all the stations.
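    The characterization amounts to grouping an exceedance series by year, month, and hour, as sketched below on a synthetic hourly rain-rate series; real work would use the station gauge data.

```python
# Find the worst year, worst month, and worst hour of the day by the
# fraction of time a rain-rate threshold is exceeded. Synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
idx = pd.date_range("2008-01-01", "2009-12-31 23:00", freq="h")
rain = pd.Series(rng.exponential(scale=1.5, size=len(idx)), index=idx)  # mm/h

exceed = (rain > 5.0).astype(float)      # exceedance of a 5 mm/h threshold
print("worst year :", exceed.groupby(exceed.index.year).mean().idxmax())
print("worst month:", exceed.groupby(exceed.index.month).mean().idxmax())
print("worst hour :", exceed.groupby(exceed.index.hour).mean().idxmax())
```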

  5. Multiple Microcomputer Control Algorithm.

    DTIC Science & Technology

    1979-09-01

    discrete and semaphore supervisor calls can be used with tasks in separate processors, in which case they are maintained in shared memory. Operations on ...the source or destination operand specifier of each mode in most cases. However, four of the 16 general register addressing modes and one of the 8 pro...instruction time is based on the specified usage factors and the best case and worst case execution times for the instruc-

  6. Effect of pesticide fate parameters and their uncertainty on the selection of 'worst-case' scenarios of pesticide leaching to groundwater.

    PubMed

    Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry

    2011-03-01

    For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.

  7. The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions

    PubMed Central

    Qu, Shaojian; Ji, Ying

    2016-01-01

    In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model in which each player has more than one competing objective. Our “worst-case weighted multi-objective game” model supposes that each player has a set of weights for its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call “robust-weighted Nash equilibrium”. We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of all players given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be used more efficiently for real-world applications. PMID:26820512

  8. 30 CFR 254.47 - Determining the volume of oil of your worst case discharge scenario.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the daily discharge rate, you must consider reservoir characteristics, casing/production tubing sizes, and historical production and reservoir pressure data. Your scenario must discuss how to respond to... drilling operations, the size of your worst case discharge scenario is the daily volume possible from an...

  9. Impact of respiratory motion on worst-case scenario optimized intensity modulated proton therapy for lung cancers.

    PubMed

    Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe

    2015-01-01

    We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%-D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  10. Fast Inference with Min-Sum Matrix Product.

    PubMed

    Felzenszwalb, Pedro F; McAuley, Julian J

    2011-12-01

    The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
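    The basic operation here is the min-sum (tropical) matrix product, C[i][j] = min_k (A[i][k] + B[k][j]). The sketch below is the straightforward O(n^3) version, not the paper's O(n^2 log n) expected-time algorithm, which prunes candidates k and is not reproduced.

```python
# Naive O(n^3) min-sum (tropical) product of two n x n matrices:
# C[i][j] = min_k A[i][k] + B[k][j].
import numpy as np

def min_sum_product(A, B):
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        # Broadcasting computes A[i, k] + B[k, j] for all k, j at once.
        C[i] = np.min(A[i][:, None] + B, axis=0)
    return C

A = np.array([[0.0, 3.0], [2.0, 0.0]])
B = np.array([[0.0, 1.0], [5.0, 0.0]])
print(min_sum_product(A, B))   # e.g. C[0,1] = min(0+1, 3+0) = 1
```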

  11. A Framework to Improve Surgeon Communication in High-Stakes Surgical Decisions: Best Case/Worst Case.

    PubMed

    Taylor, Lauren J; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Brasel, Karen J; Johnson, Sara K; Zelenski, Amy; Rathouz, Paul J; Zhao, Qianqian; Kwekkeboom, Kristine L; Campbell, Toby C; Schwarze, Margaret L

    2017-06-01

    Although many older adults prefer to avoid burdensome interventions with limited ability to preserve their functional status, aggressive treatments, including surgery, are common near the end of life. Shared decision making is critical to achieve value-concordant treatment decisions and minimize unwanted care. However, communication in the acute inpatient setting is challenging. To evaluate the proof of concept of an intervention to teach surgeons to use the Best Case/Worst Case framework as a strategy to change surgeon communication and promote shared decision making during high-stakes surgical decisions. Our prospective pre-post study was conducted from June 2014 to August 2015, and data were analyzed using a mixed methods approach. The data were drawn from decision-making conversations between 32 older inpatients with an acute nonemergent surgical problem, 30 family members, and 25 surgeons at 1 tertiary care hospital in Madison, Wisconsin. A 2-hour training session to teach each study-enrolled surgeon to use the Best Case/Worst Case communication framework. We scored conversation transcripts using OPTION 5, an observer measure of shared decision making, and used qualitative content analysis to characterize patterns in conversation structure, description of outcomes, and deliberation over treatment alternatives. The study participants were patients aged 68 to 95 years (n = 32), 44% of whom had 5 or more comorbid conditions; family members of patients (n = 30); and surgeons (n = 17). The median OPTION 5 score improved from 41 preintervention (interquartile range, 26-66) to 74 after Best Case/Worst Case training (interquartile range, 60-81). Before training, surgeons described the patient's problem in conjunction with an operative solution, directed deliberation over options, listed discrete procedural risks, and did not integrate preferences into a treatment recommendation. After training, surgeons using Best Case/Worst Case clearly presented a choice between treatments, described a range of postoperative trajectories including functional decline, and involved patients and families in deliberation. Using the Best Case/Worst Case framework changed surgeon communication by shifting the focus of decision-making conversations from an isolated surgical problem to a discussion about treatment alternatives and outcomes. This intervention can help surgeons structure challenging conversations to promote shared decision making in the acute setting.

  12. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 28 2011-07-01 2011-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  13. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 29 2012-07-01 2012-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  14. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 29 2013-07-01 2013-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  15. 40 CFR 300.324 - Response to worst case discharges.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 28 2014-07-01 2014-07-01 false Response to worst case discharges. 300.324 Section 300.324 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION...

  16. Charging and discharging characteristics of dielectric materials exposed to low- and mid-energy electrons

    NASA Technical Reports Server (NTRS)

    Coakley, P.; Kitterer, B.; Treadaway, M.

    1982-01-01

    Charging and discharging characteristics of dielectric samples exposed to 1-25 keV and 25-100 keV electrons in a laboratory environment are reported. The materials examined comprised OSR, Mylar, Kapton, perforated Kapton, and Alphaquartz, serving as models for materials employed on spacecraft in geosynchronous orbit. The tests were performed in a vacuum chamber with electron guns whose beams were rastered over the entire surface of the planar samples. The specimens were examined in low-impedance-grounded, high-impedance-grounded, and isolated configurations. The worst-case and average peak discharge currents were observed to be independent of the incident electron energy, the time-dependent changes in the worst-case discharge peak current were likewise independent of the energy, and the predischarge surface potentials were only negligibly dependent on the incident monoenergetic electron energy.

  17. Computational algebraic geometry for statistical modeling FY09Q2 progress.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre

    2009-03-01

    This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time - a significant improvement over previous algorithms, all having exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.

  18. Minimizing makespan in a two-stage flow shop with parallel batch-processing machines and re-entrant jobs

    NASA Astrophysics Data System (ADS)

    Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.

    2017-06-01

    Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.

  19. Extrapolating target tracks

    NASA Astrophysics Data System (ADS)

    Van Zandt, James R.

    2012-05-01

    Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
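
    As a rough illustration of this coasting effect, the short sketch below (Python with NumPy, not drawn from the paper) propagates a one-dimensional constant-velocity Kalman covariance through a prediction-only step of length tau and reports the RMS position error; the covariance, noise density, and latency values are illustrative assumptions, and the simple constant-velocity model stands in for the Singer model analyzed above.

      import numpy as np

      P0 = np.diag([100.0, 25.0])   # post-update covariance: position (m^2), velocity (m^2/s^2)
      q = 1.0                       # white-acceleration spectral density (m^2/s^3), illustrative

      def extrapolated_pos_sigma(P0, q, tau):
          """RMS position error after coasting (predict-only) for tau seconds."""
          F = np.array([[1.0, tau], [0.0, 1.0]])      # constant-velocity state transition
          Q = q * np.array([[tau**3 / 3, tau**2 / 2],
                            [tau**2 / 2, tau]])       # integrated white-acceleration noise
          P = F @ P0 @ F.T + Q                        # predicted (extrapolated) covariance
          return np.sqrt(P[0, 0])

      for tau in [0.0, 0.5, 1.0, 2.0, 5.0]:
          print(f"latency {tau:3.1f} s -> position sigma {extrapolated_pos_sigma(P0, q, tau):7.2f} m")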

  20. Efficiency analysis of diffusion on T-fractals in the sense of random walks.

    PubMed

    Peng, Junhao; Xu, Guoai

    2014-04-07

    Efficiently controlling the diffusion process is crucial in the study of diffusion problems in complex systems. In the sense of random walks with a single trap, mean trapping time (MTT) and mean diffusing time (MDT) are good measures of trapping efficiency and diffusion efficiency, respectively. They both vary with the location of the node. In this paper, we analyze the effects of a node's location on the trapping efficiency and diffusion efficiency of T-fractals measured by MTT and MDT. First, we provide methods to calculate the MTT for any target node and the MDT for any source node of T-fractals. The methods can also be used to calculate the mean first-passage time between any pair of nodes. Then, using the MTT and the MDT as the measures of trapping efficiency and diffusion efficiency, respectively, we compare the trapping efficiency and diffusion efficiency among all nodes of the T-fractal and find the best (or worst) trapping sites and the best (or worst) diffusing sites. Our results show that the hub node of the T-fractal is the best trapping site, but it is also the worst diffusing site; and that the three boundary nodes are the worst trapping sites, but they are also the best diffusing sites. Comparing the maximum of MTT and MDT with their minimums, we find that the maximum of MTT is almost 6 times the minimum of MTT, whereas the maximum of MDT is almost equal to the minimum of MDT. Thus, the location of the target node has a large effect on the trapping efficiency, but the location of the source node has almost no effect on the diffusion efficiency. We also simulate random walks on T-fractals; the results are consistent with the derived results.
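
    A minimal sketch of the MTT measure is given below, under the common T-graph construction in which every edge is replaced each generation by a path of two edges plus a pendant edge (an assumed construction for illustration); the trap placement, generation count, and walk budget are likewise illustrative rather than the paper's exact setup.

      import random

      def t_fractal(generations):
          """Build a T-graph: each generation replaces every edge (u, v)
          by a path u-m-v plus a new pendant edge m-w."""
          edges = [(0, 1)]
          nxt = 2
          for _ in range(generations):
              new_edges = []
              for u, v in edges:
                  m, w = nxt, nxt + 1
                  nxt += 2
                  new_edges += [(u, m), (m, v), (m, w)]
              edges = new_edges
          adj = {}
          for u, v in edges:
              adj.setdefault(u, []).append(v)
              adj.setdefault(v, []).append(u)
          return adj

      def mean_trapping_time(adj, trap, walks=20000, rng=random.Random(1)):
          """Average steps for a walker started at a uniformly random
          non-trap node to first hit the trap (Monte Carlo estimate)."""
          nodes = [n for n in adj if n != trap]
          total = 0
          for _ in range(walks):
              node = rng.choice(nodes)
              steps = 0
              while node != trap:
                  node = rng.choice(adj[node])   # unbiased step to a neighbor
                  steps += 1
              total += steps
          return total / walks

      adj = t_fractal(3)
      print("nodes:", len(adj))
      print("estimated MTT to node 0:", mean_trapping_time(adj, trap=0))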

  1. The impact of the fast ion fluxes and thermal plasma loads on the design of the ITER fast ion loss detector

    NASA Astrophysics Data System (ADS)

    Kocan, M.; Garcia-Munoz, M.; Ayllon-Guerola, J.; Bertalot, L.; Bonnet, Y.; Casal, N.; Galdon, J.; Garcia-Lopez, J.; Giacomin, T.; Gonzalez-Martin, J.; Gunn, J. P.; Rodriguez-Ramos, M.; Reichle, R.; Rivero-Rodriguez, J. F.; Sanchis-Sanchez, L.; Vayakis, G.; Veshchev, E.; Vorpahl, C.; Walsh, M.; Walton, R.

    2017-12-01

    Thermal plasma loads to the ITER Fast Ion Loss Detector (FILD) are studied for the Q_DT = 10 burning plasma equilibrium using 3D field line tracing. The simulations are performed for a FILD insertion 9-13 cm past the port plasma facing surface, optimized for fast ion measurements, and include the worst-case perturbation of the plasma boundary and the error in the magnetic reconstruction. The FILD head is exposed to superimposed time-averaged ELM heat load, static inter-ELM heat flux and plasma radiation. The study includes an estimate of the instantaneous temperature rise due to individual 0.6 MJ controlled ELMs. The maximum time-averaged surface heat load is ≲12 MW/m2 and, for a FILD insertion time of 0.2 s, will raise the FILD surface temperature to values well below the melting temperature of the materials considered here. The worst-case instantaneous temperature rise during controlled 0.6 MJ ELMs is also significantly smaller than the melting temperature of, e.g., tungsten or molybdenum, foreseen for the FILD housing.

  2. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
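
    For concreteness, here is a minimal sketch of the FISTA iteration for the l1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, the standard composite problem of the kind referred to above; the problem data, regularization weight, and iteration count are illustrative.

      import numpy as np

      def soft_threshold(z, t):
          """Proximal operator of t*||.||_1 (the shrinkage/thresholding step)."""
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def fista(A, b, lam, iters=200):
          L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth gradient
          x = np.zeros(A.shape[1])
          y, t = x.copy(), 1.0
          for _ in range(iters):
              x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)  # proximal gradient step
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2                    # momentum parameter update
              y = x_new + ((t - 1) / t_new) * (x_new - x)                 # extrapolated (momentum) point
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100); x_true[:5] = 3.0
      b = A @ x_true + 0.01 * rng.standard_normal(40)
      print("nonzeros recovered:", np.count_nonzero(np.abs(fista(A, b, lam=0.5)) > 1e-3))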

  3. Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects

    ERIC Educational Resources Information Center

    Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor

    2011-01-01

    Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…

  4. 30 CFR 553.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...

  5. 30 CFR 553.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...

  6. 30 CFR 553.13 - How much OSFR must I demonstrate?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... OIL SPILL FINANCIAL RESPONSIBILITY FOR OFFSHORE FACILITIES Applicability and Amount of OSFR § 553.13... the following table: COF worst case oil-spill discharge volume Applicable amount of OSFR Over 1,000... worst case oil-spill discharge of 1,000 bbls or less if the Director notifies you in writing that the...

  7. Walsh Preprocessor.

    DTIC Science & Technology

    1980-08-01

    the sequence threshold does not utilize the DC level information and the time thresholding adaptively adjusts for DC level. This characteristic...lowest 256/8 = 32 elements. The above observation can be mathematically proven to also relate to the fact that the lowest (NT/W) elements can, at worst case

  8. Statistical analysis of environmental monitoring data: does a worst case time for monitoring clean rooms exist?

    PubMed

    Cundell, A M; Bean, R; Massimore, L; Maier, C

    1998-01-01

    To determine the relationship between the sampling time of the environmental monitoring, i.e., viable counts, in aseptic filling areas and the microbial count and frequency of alerts for air, surface and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. Statistical analyses showed that, except for one floor location that had a significantly higher number of counts (but no alert- or action-level samples) in the first two hours of operation, there was no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at that floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst-case time for environmental monitoring at that facility and that sampling at any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during set-up and aseptic filling.

  9. Challenges to validation of a complex nonsterile medical device tray.

    PubMed

    Prince, Daniel; Mastej, Jozef; Hoverman, Isabel; Chatterjee, Raja; Easton, Diana; Behzad, Daniela

    2014-01-01

    Validation by steam sterilization of reusable medical devices requires careful attention to many parameters that directly influence whether or not complete sterilization occurs. Complex implant/instrument tray systems have a variety of configurations and components. Geobacillus stearothermophilus biological indicators (BIs) are used in overkill cycles to simulate worst-case conditions and are intended to provide substantial sterilization assurance. Survival of G. stearothermophilus spores was linked to steam access and the size of the load in the chamber. By a small and reproducible margin, trays placed in a rigid container in minimally loaded chambers were found to be more difficult to sterilize completely than those in maximally loaded chambers.

  10. Parallel transmission pulse design with explicit control for the specific absorption rate in the presence of radiofrequency errors.

    PubMed

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien

    2016-06-01

    A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
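
    The outer loop of this strategy can be sketched compactly. In the toy code below, design_pulse and worst_case_sar are hypothetical stand-ins for a real constrained pTx pulse optimizer and an RF-error SAR evaluation; only the loop structure (design, evaluate the worst case, tighten the constraint, repeat) reflects the framework described above.

      import math

      SAR_LIMIT = 10.0   # illustrative safety limit (W/kg)

      def design_pulse(sar_constraint):
          # Hypothetical stand-in: a real optimizer would return RF waveforms
          # whose nominal SAR sits at (or below) the given constraint.
          return {"nominal_sar": sar_constraint}

      def worst_case_sar(pulse, amp_err=0.08, phase_err_deg=3.0):
          # Hypothetical error model: amplitude/phase errors inflate SAR.
          # The error magnitudes mirror those reported above for spokes pulses.
          inflation = (1.0 + amp_err) ** 2 * (1.0 + math.radians(phase_err_deg))
          return pulse["nominal_sar"] * inflation

      constraint = SAR_LIMIT
      while True:
          pulse = design_pulse(constraint)            # step 1: constrained design
          wc = worst_case_sar(pulse)                  # step 2: worst-case evaluation
          if wc <= SAR_LIMIT * (1 + 1e-12):           # step 3: stop once within the limit
              break
          constraint *= SAR_LIMIT / wc                # otherwise tighten and re-design
      print(f"final constraint {constraint:.2f} W/kg, worst-case SAR {wc:.2f} W/kg")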

  11. Enabling Requirements-Based Programming for Highly-Dependable Complex Parallel and Distributed Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.

  12. 41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What is meant by “reasonable worst case fire scenario”? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...

  13. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  14. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  15. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  16. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  17. 40 CFR Appendix D to Part 112 - Determination of a Worst Case Discharge Planning Volume

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Determination of a Worst Case Discharge Planning Volume D Appendix D to Part 112 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS OIL POLLUTION PREVENTION Pt. 112, App. D Appendix D to Part 112—Determination of a...

  18. 41 CFR 102-80.150 - What is meant by “reasonable worst case fire scenario”?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-01-01 2010-01-01 false What is meant by “reasonable worst case fire scenario”? 102-80.150 Section 102-80.150 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80...

  19. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  20. In situ LTE exposure of the general public: Characterization and extrapolation.

    PubMed

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
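
    A minimal sketch of the bookkeeping involved: scale an RS measurement by an extrapolation factor to a worst-case LTE field, then combine independent sources by a quadrature (power) sum. The numeric values and the extrapolation factor below are hypothetical placeholders, not the paper's calibrated factor.

      import math

      def extrapolate_lte(e_rs, factor):
          """Worst-case LTE field from an RS measurement; factor is system-specific."""
          return e_rs * factor

      def total_field(fields):
          """Total E-field of independent sources adds in quadrature (power sum)."""
          return math.sqrt(sum(e * e for e in fields))

      e_lte_worst = extrapolate_lte(e_rs=0.06, factor=30.0)   # illustrative values (V/m)
      print(round(e_lte_worst, 2), "V/m worst-case LTE")
      print(round(total_field([e_lte_worst, 4.2, 0.8]), 2), "V/m total with other sources")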

  1. The impact of changing dental needs on cost savings from fluoridation.

    PubMed

    Campain, A C; Mariño, R J; Wright, F A C; Harrison, D; Bailey, D L; Morgan, M V

    2010-03-01

    Although community water fluoridation has been one of the cornerstone strategies for the prevention and control of dental caries, questions are still raised regarding its cost-effectiveness. This study assessed the impact of changing dental needs on the cost savings from community water fluoridation in Australia. Net costs were estimated as Costs(programme) minus Costs(averted caries). Averted costs were estimated as the product of the caries increment in a non-fluoridated community, the effectiveness of fluoridation, and the cost of a carious surface. Modelling considered four age cohorts (6-20, 21-45, 46-65 and 66+ years) and three time points (1970s, 1980s, and 1990s). The cost of a carious surface was estimated by conventional and complex methods. Real discount rates of 4, 7 (base) and 10% were utilized. With base-case assumptions, the average annual cost savings per person, in Australian dollars at the 2005 level, ranged from $56.41 (1970s) to $17.75 (1990s) (conventional method) and from $249.45 (1970s) to $69.86 (1990s) (complex method). Under worst-case assumptions fluoridation remained cost-effective, with cost savings ranging from $24.15 (1970s) to $3.87 (1990s) (conventional method) and from $107.85 (1970s) to $24.53 (1990s) (complex method). For the 66+ years cohort (1990s) fluoridation did not show a cost saving, but the costs per person were marginal. Community water fluoridation remains a cost-effective preventive measure in Australia.
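
    A worked sketch of the averted-cost arithmetic described above (net cost = programme cost minus caries increment x effectiveness x cost per carious surface); all numbers are illustrative, not the study's inputs.

      def annual_net_cost_per_person(programme_cost, increment, effectiveness, cost_per_surface):
          """Net cost per person per year; negative values indicate a cost saving."""
          averted = increment * effectiveness * cost_per_surface   # Costs(averted caries)
          return programme_cost - averted

      # e.g. $2 programme cost, 0.9 surfaces/person/year increment,
      # 30% caries reduction, $120 per carious surface (all hypothetical)
      print(annual_net_cost_per_person(2.0, 0.9, 0.30, 120.0))   # -30.4 => saving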

  2. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion less than 5 mm and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved controls over isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.

  3. SU-C-19A-07: Influence of Immobilization On Plan Robustness in the Treatment of Head and Neck Cancer with IMPT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bues, M; Anand, A; Liu, W

    2014-06-15

    Purpose: We evaluated the effect of interposing immobilization devices into the beam's path on the robustness of a head and neck plan. Methods: An anthropomorphic head phantom was placed into a preliminary prototype of a specialized head and neck immobilization device for proton beam therapy. The device consists of a hard low density shell, a custom mold insert, and a thermoplastic mask to immobilize the patient's head in the shell. This device was provided by CIVCO Medical Solutions for the purpose of evaluation of suitability for proton beam therapy. See Figure 1. Two pairs of treatment plans were generated. The first plan in each pair was a reference plan including only the anthropomorphic phantom, and the second plan in each pair included the immobilization device. In all other respects the plans within the pair were identical. Results: In the case of the simple plan the degradation of plan robustness was found to be clinically insignificant. In this case, target coverage in the worst case scenario was reduced from 95% of the target volume receiving 96.5% of prescription dose to 95% of the target volume receiving 96.3% of prescription dose by introducing the immobilization device. In the case of the complex plan, target coverage of the boost volume in the worst case scenario was reduced from 95% of the boost target volume receiving 97% of prescription dose to 95% of the boost target volume receiving 83% of prescription dose by introducing the immobilization device. See Figure 2. Conclusion: Immobilization devices may have a deleterious effect on plan robustness. Evaluation of the preliminary prototype revealed a variable impact on the plan robustness depending on the complexity of the case. Brian Morse is an employee of CIVCO Medical Solutions.

  4. Updated model assessment of pollution at major U. S. airports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamartino, R.J.; Rote, D.M.

    1979-02-01

    The air quality impact of aircraft at and around Los Angeles International Airport (LAX) was simulated for hours of peak aircraft operation and 'worst case' pollutant dispersion conditions by using an updated version of the Argonne Airport Vicinity Air Pollution model; field programs at LAX, O'Hare, and John F. Kennedy International Airports determined the 'worst case' conditions. Maximum carbon monoxide concentrations at LAX were low relative to National Ambient Air Quality Standards; relatively high and widespread hydrocarbon concentrations indicated that aircraft emissions may aggravate oxidant problems near the airport; nitrogen oxide concentrations were close to the levels set in proposed standards. Data on typical time-in-mode for departing and arriving aircraft, the 8/4/77 diurnal variation in airport activity, and carbon monoxide concentration isopleths are given, and the update factors in the model are discussed.

  5. Bristol Ridge: A 28-nm x86 Performance-Enhanced Microprocessor Through System Power Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundaram, Sriram; Grenat, Aaron; Naffziger, Samuel

    Power management techniques can be effective at extracting more performance and energy efficiency out of mature systems on chip (SoCs). For instance, the peak performance of microprocessors is often limited by worst-case technology (Vmax), infrastructure (thermal/electrical), and microprocessor usage assumptions. Performance/watt of microprocessors also typically suffers from guard bands associated with the test and binning processes as well as worst-case aging/lifetime degradation. Similarly, on multicore processors, shared voltage rails tend to limit the peak performance achievable in low thread count workloads. In this paper, we describe five power management techniques that maximize the per-part performance under the aforementioned constraints. Using these techniques, we demonstrate a net performance increase of up to 15% depending on the application and TDP of the SoC, implemented on 'Bristol Ridge,' a 28-nm CMOS, dual-core x86 accelerated processing unit.

  6. A Diffusion Model Explanation of the Worst Performance Rule for Reaction Time and IQ

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Schmiedek, Florian; McKoon, Gail

    2008-01-01

    The worst performance rule for cognitive tasks [Coyle, T.R. (2003). IQ, the worst performance rule, and Spearman's law: A reanalysis and extension. "Intelligence," 31, 567-587] in which reaction time is measured is the result that IQ scores correlate better with longer (i.e., 0.7 and 0.9 quantile) reaction times than shorter (i.e., 0.1 and 0.3…

  7. Less than severe worst case accidents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, G.A.

    1996-08-01

    Many systems can provide tremendous benefit if operating correctly, produce only an inconvenience if they fail to operate, but have extreme consequences if they are only partially disabled such that they operate erratically or prematurely. In order to assure safety, systems are often tested against the most severe environments and accidents that are considered possible, to ensure either safe operation or safe failure. However, it is often the less severe environments which result in the 'worst case accident', since these are the conditions in which part of the system may be exposed or rendered unpredictable prior to total system failure. Some examples of less severe mechanical, thermal, and electrical environments which may actually be worst case are described as cautions for others in industries with high consequence operations or products.

  8. Shortening Delivery Times of Intensity Modulated Proton Therapy by Reducing Proton Energy Layers During Treatment Plan Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl; Kooy, Hanne M.; Heijmen, Ben J.M.

    2015-06-01

    Purpose: To shorten delivery times of intensity modulated proton therapy by reducing the number of energy layers in the treatment plan. Methods and Materials: We have developed an energy layer reduction method, which was implemented into our in-house-developed multicriteria treatment planning system “Erasmus-iCycle.” The method consisted of 2 components: (1) minimizing the logarithm of the total spot weight per energy layer; and (2) iteratively excluding low-weighted energy layers. The method was benchmarked by comparing a robust “time-efficient plan” (with energy layer reduction) with a robust “standard clinical plan” (without energy layer reduction) for 5 oropharyngeal cases and 5 prostate cases. Both plans of each patient had equal robust plan quality, because the worst-case dose parameters of the standard clinical plan were used as dose constraints for the time-efficient plan. Worst-case robust optimization was performed, accounting for setup errors of 3 mm and range errors of 3% + 1 mm. We evaluated the number of energy layers and the expected delivery time per fraction, assuming 30 seconds per beam direction, 10 ms per spot, and 400 Giga-protons per minute. The energy switching time was varied from 0.1 to 5 seconds. Results: The number of energy layers was on average reduced by 45% (range, 30%-56%) for the oropharyngeal cases and by 28% (range, 25%-32%) for the prostate cases. When assuming 1, 2, or 5 seconds energy switching time, the average delivery time was shortened from 3.9 to 3.0 minutes (25%), 6.0 to 4.2 minutes (32%), or 12.3 to 7.7 minutes (38%) for the oropharyngeal cases, and from 3.4 to 2.9 minutes (16%), 5.2 to 4.2 minutes (20%), or 10.6 to 8.0 minutes (24%) for the prostate cases. Conclusions: Delivery times of intensity modulated proton therapy can be reduced substantially without compromising robust plan quality. Shorter delivery times are likely to reduce treatment uncertainties and costs.
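
    The second component, iterative exclusion of low-weighted energy layers, can be sketched as follows. Here reoptimize is a hypothetical stand-in for the treatment-plan optimizer (it merely renormalizes the remaining weights), and the energies, weights, and threshold are illustrative.

      def reoptimize(layers):
          # Hypothetical: a real optimizer would redistribute spot weights over
          # the remaining layers; here we simply renormalize to conserve weight.
          total = sum(layers.values())
          return {e: w / total for e, w in layers.items()}

      def reduce_energy_layers(layers, min_fraction=0.02):
          """Iteratively drop the lowest-weighted layer until all survivors
          carry at least min_fraction of the total spot weight."""
          layers = reoptimize(layers)
          while True:
              e_min = min(layers, key=layers.get)
              if layers[e_min] >= min_fraction or len(layers) == 1:
                  return layers
              del layers[e_min]               # exclude the low-weighted layer
              layers = reoptimize(layers)     # and re-optimize the rest

      spot_weights = {150: 0.40, 148: 0.30, 146: 0.18, 144: 0.09, 142: 0.02, 140: 0.01}
      print(sorted(reduce_energy_layers(spot_weights)))   # layer 140 is excluded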

  9. A Comparison of Learning Technologies for Teaching Spacecraft Software Development

    ERIC Educational Resources Information Center

    Straub, Jeremy

    2014-01-01

    The development of software for spacecraft represents a particular challenge and is, in many ways, a worst case scenario from a design perspective. Spacecraft software must be "bulletproof" and operate for extended periods of time without user intervention. If the software fails, it cannot be manually serviced. Software failure may…

  10. Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground

    ERIC Educational Resources Information Center

    Oberg, Gunilla

    2009-01-01

    Newcomers often underestimate the challenges of interdisciplinary work and, as a rule, do not spend sufficient time to allow them to overcome differences and create common ground, which in turn leads to frustration, unresolved conflicts, and, in the worst case scenario, discontinued work. The key to successful collaboration is to facilitate the…

  11. Case Study: POLYTECH High School, Woodside, Delaware.

    ERIC Educational Resources Information Center

    Southern Regional Education Board, Atlanta, GA.

    POLYTECH High School in Woodside, Delaware, has gone from being among the worst schools in the High Schools That Work (HSTW) network to among the best. Polytech, which is now a full-time technical high school, has improved its programs and outcomes by implementing a series of organizational, curriculum, teaching, guidance, and leadership changes,…

  12. Evaluating predictors of lead exposure for activities disturbing materials painted with or containing lead using historic published data from U.S. workplaces.

    PubMed

    Locke, Sarah J; Deziel, Nicole C; Koh, Dong-Hee; Graubard, Barry I; Purdue, Mark P; Friesen, Melissa C

    2017-02-01

    We evaluated predictors of differences in published occupational lead concentrations for activities disturbing material painted with or containing lead in U.S. workplaces to aid historical exposure reconstruction. For the aforementioned tasks, 221 air and 113 blood lead summary results (1960-2010) were extracted from a previously developed database. Differences in the natural log-transformed geometric mean (GM) for year, industry, job, and other ancillary variables were evaluated in meta-regression models that weighted each summary result by its inverse variance and sample size. Air and blood lead GMs declined 5%/year and 6%/year, respectively, in most industries. Exposure contrast in the GMs across the nine jobs and five industries was higher based on air versus blood concentrations. For welding activities, blood lead GMs were 1.7 times higher in worst-case versus non-worst case scenarios. Job, industry, and time-specific exposure differences were identified; other determinants were too sparse or collinear to characterize. Am. J. Ind. Med. 60:189-197, 2017. © 2017 Wiley Periodicals, Inc.

  13. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  14. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707
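
    A minimal sketch of the OGM iteration on a smooth convex quadratic, following the published update form (a gradient step, a FISTA-like theta update, and a second momentum term); the special final-iterate theta adjustment is omitted for brevity, and the test problem is illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      M = rng.standard_normal((30, 10))
      A = M.T @ M                          # positive-definite Hessian
      b = rng.standard_normal(10)
      L = np.linalg.eigvalsh(A).max()      # gradient Lipschitz constant
      grad = lambda x: A @ x - b           # gradient of f(x) = 0.5 x'Ax - b'x

      x = y = np.zeros(10)
      theta = 1.0
      for _ in range(100):
          y_new = x - grad(x) / L                               # gradient step
          theta_new = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2     # momentum parameter
          # OGM's distinguishing feature: the extra (theta/theta_new)*(y_new - x) term
          x = y_new + ((theta - 1) / theta_new) * (y_new - y) \
                    + (theta / theta_new) * (y_new - x)
          y, theta = y_new, theta_new

      print("||grad|| at final iterate:", np.linalg.norm(grad(y)))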

  15. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    DTIC Science & Technology

    2017-05-08

    hybrid photonic torus, the all-optical Corona crossbar, and the hybrid hierarchical Firefly crossbar. The key challenges for waveguide photonics...improves SXR but with relatively higher EDP overhead. Our evaluation results indicate that the encoding schemes improve worst-case SXR in Corona and...photonic crossbar architectures (Corona and Firefly) indicate that our approach improves worst-case signal-to-noise ratio (SNR) by up to 51.7

  16. A case-control, mono-center, open-label, pilot study to evaluate the feasibility of therapeutic touch in preventing radiation dermatitis in women with breast cancer receiving adjuvant radiation therapy.

    PubMed

    Younus, Jawaid; Lock, Michael; Vujovic, Olga; Yu, Edward; Malec, Jitka; D'Souza, David; Stitt, Larry

    2015-08-01

    Therapeutic touch (TT) is a non-invasive, commonly used complementary therapy. TT is based on the use of hand movements and detection of energy field congestion to correct imbalances. Improvement in subjective symptoms has been seen with TT in a variety of clinical trials. The effect of TT during radiotherapy for breast cancer is unknown. Women undergoing adjuvant radiation for Stage I/II breast cancer after conservative surgery were recruited for this cohort study. TT treatments were administered three times per week following radiation therapy. Feasibility was defined as an a priori threshold of 15 of 17 patients completing all TT treatments. The preventive effectiveness of TT was evaluated by documenting the time to develop and the worst grade of radiation dermatitis. Toxicity was assessed using the NCIC CTC V3 dermatitis scale. Cosmetic rating was performed using the EORTC Breast Cosmetic Rating. Quality of life, mood and energy, and fatigue were assessed by the EORTC QLQ C30, POMS, and BFI, respectively. The parameters were assessed at baseline and serially during treatment. A total of 49 patients entered the study (17 in the TT cohort and 32 in the control cohort). Median age was 63 years in the TT arm and 59 years in the control arm. TT was considered feasible, as all 17 patients screened completed TT treatment. There were no side effects observed with the TT treatments. In the TT cohort, the worst grade of radiation dermatitis was grade II, in nine patients (53%). Median time to develop the worst grade was 22 days. In the control cohort, the worst grade of radiation dermatitis was grade III, in 1 patient; however, the most common toxicity grade was II, in 15 patients (47%). Three patients did not develop any dermatitis. Median time to develop the worst grade in the control group was 31 days. There was no difference between cohorts in the overall EORTC cosmetic score, and there was no significant difference between before- and after-study levels of quality of life, mood and fatigue. This study is the first evaluation of TT in patients with breast cancer using objective measures. Although TT is feasible for the management of radiation-induced dermatitis, we were not able to detect a significant benefit of TT on NCIC toxicity grade or on the time to develop the worst grade of radiation dermatitis. In addition, TT did not improve quality of life, mood, fatigue or overall cosmetic outcome. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Modelling of occupational respirable crystalline silica exposure for quantitative exposure assessment in community-based case-control studies.

    PubMed

    Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans

    2011-11-01

    We describe an empirical model for exposure to respirable crystalline silica (RCS) to create a quantitative job-exposure matrix (JEM) for community-based studies. Personal measurements of exposure to RCS from Europe and Canada were obtained for exposure modelling. A mixed-effects model was elaborated, with region/country and job titles as random effect terms. The fixed effect terms included year of measurement, measurement strategy (representative or worst-case), sampling duration (minutes) and a priori exposure intensity rating for each job from an independently developed JEM (none, low, high). 23,640 personal RCS exposure measurements, covering a time period from 1976 to 2009, were available for modelling. The model indicated an overall downward time trend in RCS exposure levels of -6% per year. Exposure levels were higher in the UK and Canada, and lower in Northern Europe and Germany. Worst-case sampling was associated with higher reported exposure levels and an increase in sampling duration was associated with lower reported exposure levels. Highest predicted RCS exposure levels in the reference year (1998) were for chimney bricklayers (geometric mean 0.11 mg m(-3)), monument carvers and other stone cutters and carvers (0.10 mg m(-3)). The resulting model enables us to predict time-, job-, and region/country-specific exposure levels of RCS. These predictions will be used in the SYNERGY study, an ongoing pooled multinational community-based case-control study on lung cancer.
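
    A simplified sketch of such a model: natural-log exposure regressed on fixed effects for year, sampling strategy, and duration, with a random intercept per job. statsmodels' MixedLM supports a single grouping factor, so this stands in for the paper's crossed random effects for region/country and job, and the data are synthetic.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 400
      df = pd.DataFrame({
          "year": rng.integers(1976, 2010, n),
          "worst_case": rng.integers(0, 2, n),                      # 1 = worst-case sampling
          "log_duration": np.log(rng.integers(30, 480, n)),         # sampling minutes
          "job": rng.choice(["bricklayer", "stone_cutter", "driller", "grinder"], n),
      })
      job_effect = df["job"].map({"bricklayer": 0.5, "stone_cutter": 0.4,
                                  "driller": 0.1, "grinder": 0.0})
      df["log_rcs"] = (-0.06 * (df["year"] - 1998)     # ~ -6%/year time trend, as above
                       + 0.3 * df["worst_case"]        # worst-case sampling reads higher
                       - 0.2 * df["log_duration"]      # longer samples read lower
                       + job_effect + rng.normal(0, 0.8, n))

      model = smf.mixedlm("log_rcs ~ year + worst_case + log_duration",
                          df, groups=df["job"])        # random intercept per job
      print(model.fit().summary())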

  18. A VLSI implementation of DCT using pass transistor technology

    NASA Technical Reports Server (NTRS)

    Kamath, S.; Lynn, Douglas; Whitaker, Sterling

    1992-01-01

    A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in a real-time fashion operating at 34 MHz (worst case) is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and re-sizing.
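
    For reference, a small sketch of radix-4 modified Booth recoding, the scheme such MAC cells use to halve the number of partial products: each overlapping triplet of multiplier bits maps to a digit in {-2, -1, 0, 1, 2}. The bit width and test values are illustrative.

      def booth_digits(x, bits=16):
          """Recode a two's-complement multiplier into radix-4 Booth digits."""
          u = x & ((1 << bits) - 1)                    # two's-complement bit pattern
          b = [(u >> i) & 1 for i in range(bits)]      # bit 0 (LSB) .. bit bits-1
          b = [0] + b + [b[-1]]                        # implicit b_-1 = 0, plus sign extension
          # digit_i = -2*b_{2i+1} + b_{2i} + b_{2i-1} for each overlapping triplet
          return [-2 * b[i + 2] + b[i + 1] + b[i] for i in range(0, bits, 2)]

      def booth_multiply(a, x, bits=16):
          """Multiply a by x as the sum of Booth partial products a * d * 4^i."""
          return sum(d * (4 ** i) * a for i, d in enumerate(booth_digits(x, bits)))

      print(booth_multiply(123, -45), 123 * -45)   # both print -5535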

  19. Comments on Samal and Henderson: Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swain, M.J.

    Samal and Henderson claim that any parallel algorithm for enforcing arc consistency in the worst case must have Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. The authors argue that Samal and Henderson's argument makes assumptions about how processors are used, and give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^(na)) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were to be proven, it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polynomial log time unless P = NC.
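
    For context, the standard sequential algorithm being parallelized here is AC-3; below is a minimal sketch on a toy binary CSP (x < y < z over {1, 2, 3}). The queue-based structure is the textbook algorithm; the example problem is illustrative.

      from collections import deque

      def revise(domains, constraint, x, y):
          """Remove values of x with no supporting value of y; report if changed."""
          removed = False
          for vx in list(domains[x]):
              if not any(constraint(vx, vy) for vy in domains[y]):
                  domains[x].remove(vx)
                  removed = True
          return removed

      def ac3(domains, constraints):
          """constraints maps each arc (x, y) to a predicate on (value_x, value_y)."""
          queue = deque(constraints)
          while queue:
              x, y = queue.popleft()
              if revise(domains, constraints[(x, y)], x, y):
                  if not domains[x]:
                      return False                    # domain wipe-out: inconsistent
                  # re-examine arcs pointing at x, except the one just used
                  queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
          return True

      domains = {v: {1, 2, 3} for v in "xyz"}
      constraints = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: a > b,
                     ("y", "z"): lambda a, b: a < b, ("z", "y"): lambda a, b: a > b}
      print(ac3(domains, constraints), domains)       # True {'x': {1}, 'y': {2}, 'z': {3}}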

  20. Method of Generating Transient Equivalent Sink and Test Target Temperatures for Swift BAT

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    2004-01-01

    The NASA Swift mission has a 600-km altitude and a 22-degree maximum inclination. The sun angle varies from 45 degrees to 180 degrees in normal operation. As a result, the environmental heat fluxes absorbed by the Burst Alert Telescope (BAT) radiator and loop heat pipe (LHP) compensation chambers (CCs) vary transiently, and therefore the equivalent sink temperatures for the radiator and CCs also vary transiently. In thermal performance verification testing in vacuum, the radiator and CCs radiated heat to sink targets. This paper presents an analytical technique for generating orbit transient equivalent sink temperatures and a technique for generating transient sink target temperatures for the radiator and LHP CCs. Using these techniques, transient target temperatures for the radiator and LHP CCs were generated for three thermal environmental cases: worst hot case, worst cold case, and cooldown and warmup between the worst hot case in sunlight and the worst cold case in eclipse, and for three heat transport values: 128 W, 255 W, and 382 W. The 128 W case assumed that the two LHPs each transport half of the 255 W of waste heat from the detector array to the radiator. The 255 W case assumed that one LHP fails, so that the remaining LHP transports all the waste heat from the detector array to the radiator. The 382 W case made the same single-LHP assumption and added a 50% design margin. All these transient target temperatures were successfully implemented in the engineering test unit (ETU) LHP and flight LHP thermal performance verification tests in vacuum.
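
    The equivalent-sink idea rests on the standard radiative balance T_sink = (q_abs / (eps * sigma))^(1/4); the sketch below applies it to illustrative hot-case and cold-case absorbed fluxes. The specific flux numbers and emissivity are assumptions for illustration, not Swift BAT values.

      SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

      def equivalent_sink_temperature(q_absorbed, emissivity):
          """T_sink such that a passive surface radiating to it balances q_absorbed."""
          return (q_absorbed / (emissivity * SIGMA)) ** 0.25

      # absorbed environmental flux (solar + albedo + Earth IR) at one orbit point
      for label, q in [("worst hot case", 280.0), ("worst cold case", 60.0)]:
          t = equivalent_sink_temperature(q, emissivity=0.85)
          print(f"{label}: T_sink = {t:.1f} K ({t - 273.15:.1f} C)")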

  1. Increased care demand and medical costs after falls in nursing homes: A Delphi study.

    PubMed

    Sterke, Carolyn Shanty; Panneman, Martien J; Erasmus, Vicki; Polinder, Suzanne; van Beeck, Ed F

    2018-04-21

    To estimate the increased care demand and medical costs caused by falls in nursing homes. There is compelling evidence that falls in nursing homes are preventable. However, proper implementation of evidence-based guidelines to prevent falls is often hindered by insufficient management support, staff time and funding. A three-round Delphi study. A panel of 41 experts, all working in nursing homes in the Netherlands, received three online questionnaires to estimate the extra hours of care needed during the first year after a fall. This was estimated for ten fall categories with different levels of injury severity, in three scenarios: a best-case, a typical-case and a worst-case scenario. We calculated the costs of falls by multiplying the mean number of extra hours that the participants spent on the care of a resident after a fall by their hourly wages. In the case of a noninjurious fall, the extra time spent on the faller is on average almost 5 hr, which expressed in euros adds up to €193. The extra staff time and costs of falls increased with increasing severity of injury. In the case of a fracture of the lower limb, the extra staff time increased to 132 hr, which expressed in euros is €4,604. In the worst-case scenario of a fracture of the lower limb, the extra staff time increased to 284 hr, which expressed in euros is €10,170. Falls in nursing homes result in a great deal of extra staff time spent on care, with extra costs varying between €193 for a noninjurious fall and €10,170 for serious falls. This study could aid decision-making on investing in appropriate implementation of falls prevention interventions in nursing homes. © 2018 John Wiley & Sons Ltd.
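
    A worked sketch of the cost rule stated above (cost = mean extra care hours x hourly wage), using the extra-hours figures from the abstract and a single hypothetical average wage; the study weighted wages across staff categories, so its euro figures differ slightly from these.

      HOURLY_WAGE_EUR = 35.0   # hypothetical average nursing wage (EUR/hour)

      def fall_cost(extra_hours, wage=HOURLY_WAGE_EUR):
          """Medical cost of a fall: extra care hours times hourly wage."""
          return extra_hours * wage

      for label, hours in [("noninjurious fall", 5),
                           ("lower-limb fracture, typical case", 132),
                           ("lower-limb fracture, worst case", 284)]:
          print(f"{label}: ~EUR {fall_cost(hours):,.0f}")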

  2. Defense Small Business Innovation Research Program (SBIR), Volume 4, Defense Agencies Abstracts of Phase 1 Awards 1991

    DTIC Science & Technology

    1991-01-01

    EXPERIENCE IN DEVELOPING INTEGRATED OPTICAL DEVICES, NONLINEAR MAGNETIC-OPTIC MATERIALS, HIGH FREQUENCY MODULATORS, COMPUTER-AIDED MODELING AND SOPHISTICATED... HIGH-LEVEL PRESENTATION AND DISTRIBUTED CONTROL MODELS FOR INTEGRATING HETEROGENEOUS MECHANICAL ENGINEERING APPLICATIONS AND TOOLS. THE DESIGN IS FOCUSED...STATISTICALLY ACCURATE WORST CASE DEVICE MODELS FOR CIRCUIT SIMULATION. PRESENT METHODS OF WORST CASE DEVICE DESIGN ARE AD HOC AND DO NOT ALLOW THE

  3. Complexing Agents and pH Influence on Chemical Durability of Type I Molded Glass Containers.

    PubMed

    Biavati, Alberto; Poncini, Michele; Ferrarini, Arianna; Favaro, Nicola; Scarpa, Martina; Vallotto, Marta

    2017-01-01

    Among the factors that affect the chemical durability of the glass surface, pH and the complexing agents present in aqueous solution play the main role. Glass surface attack is also related to the delamination issue, which causes glass particles to appear in the pharmaceutical preparation. A few methods to check the delamination propensity of glass containers and some control guidelines have been proposed. The present study emphasizes the possible synergy between a few complexing agents and pH in borosilicate glass chemical durability. Hydrolytic attack was performed in small-volume 23 mL type I glass containers autoclaved according to the European Pharmacopoeia or United States Pharmacopeia for 1 h at 121 °C, in order to enhance the chemical attack due to time, temperature, and the unfavorable surface/volume ratio. Solutions of 0.048 M or 0.024 M citric, glutaric, and acetic acids and EDTA (ethylenediaminetetraacetic acid), together with sodium phosphate, and with water for comparison, were used for the trials. The pH was adjusted to fixed values of 5.5, 6.6, 7, 7.4, 8, and 9 (±0.05 units) with dilute LiOH solution. Because silicon is the main glass network former, silicon release into the attack solutions was chosen as the main index of glass surface attack and analysed by inductively coupled plasma atomic emission spectrometry. The work was completed by analysing the silicon release under the worst attack conditions for molded glass, soda-lime type II glass, and tubing borosilicate glass vials, to compare different glass compositions and forming technologies. Surface analysis by scanning electron microscopy was finally performed to check the surface status after the worst chemical attack condition, by citric acid. LAY ABSTRACT: Glass, like every packaging material, can have some usage limits, mainly in basic pH solutions. The issue of glass surface degradation particles appearing in vials (delamination) has forced a number of drug product recalls in recent years. To prevent such situations, pharmaceutical and biopharmaceutical manufacturers need to understand the causes of accelerated glass surface corrosion, mainly in the case of injectables. Some drugs can contain active components with a known ability to corrode the glass silica network. Sometimes these ingredients are dissolved in an alkaline medium that dramatically increases the glass corrosion and potentially causes the issue. As this action is strongly affected by time and temperature, flaking may become visible only after a long storage time. The purpose of this investigation is to verify the chemical durability of borosilicate glass under controlled conditions of time and temperature when in contact with test solutions containing different complexing agents at varying pH. The Si concentration in the extract solution is taken as an index of glass dissolution under constant autoclaving conditions of 1 h at 121 °C, which simulates approximately five years of contact at room temperature. Acetate, citrate, ethylenediaminetetraacetic acid (EDTA), phosphate, and glutarate solutions (0.048 M or 0.024 M) were used at increasing pH from 5.5 to 9.0. The chemical durability of two borosilicate tubing glass vials of different glass compositions was compared with that of the molded one under the worst attack conditions, by citric acid. Although no delamination issue was observed in this study in type I molded or tubing containers, the conclusions developed can provide pharmaceutical manufacturers with useful information to prevent glass delamination risk in their processes. © PDA, Inc. 2017.

  4. Application of Time-Delay Absorber to Suppress Vibration of a Dynamical System to Tuned Excitation.

    PubMed

    El-Ganaini, W A A; El-Gohary, H A

    2014-08-01

    In this work, we present a comprehensive investigation of the effects of a time-delay absorber on the control of a dynamical system represented by a cantilever beam subjected to tuned excitation forces. The cantilever beam is one of the most widely used systems in engineering applications, such as mechanical and civil engineering. The main aim of this work is to control the vibration of the beam at the simultaneous internal and combined resonance condition, as it is the worst resonance case. Control is conducted via a time-delay absorber to suppress chaotic vibrations. Time delays often appear in control systems in the state, in the control input, or in the measurements. Time delay commonly exists in various engineering, biological, and economic systems because of the finite speed of information processing. It is a source of performance degradation and instability. The multiple time scales perturbation method is applied to obtain a first-order approximation for the nonlinear differential equations describing the system behavior. The different resonance cases are reported and studied numerically. The stability of the steady-state solution at the selected worst resonance case is investigated by applying the fourth-order Runge-Kutta method and frequency response equations via Matlab 7.0 and Maple 11. The time-delay absorber is effective, but only within a specified range of time delay, which is the critical factor in selecting such an absorber. The time-delay absorber is better than the ordinary one from the effectiveness point of view. The effects of the different absorber parameters on the system behavior and stability are studied numerically. A comparison with the available published work showed close agreement with some previously published results.
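
    As a purely numerical companion to the analysis above (not the multiple time scales treatment), the sketch below integrates a damped oscillator with delayed position feedback, x'' + 2*zeta*w*x' + w^2*x = -g*x(t - tau), by fixed-step Euler with a history buffer for the delayed state; all parameter values are illustrative.

      w, zeta, g, tau = 1.0, 0.02, 0.3, 1.5   # frequency, damping, feedback gain, delay
      dt, steps = 0.001, 40000
      lag = int(tau / dt)                     # delay expressed in integration steps

      x_hist = [1.0] * (lag + 1)              # constant initial history: x(t <= 0) = 1
      x, v = 1.0, 0.0
      for _ in range(steps):
          x_delayed = x_hist[-(lag + 1)]                    # x(t - tau) from the buffer
          a = -2 * zeta * w * v - w * w * x - g * x_delayed # acceleration with delayed term
          x, v = x + dt * v, v + dt * a                     # explicit Euler step
          x_hist.append(x)

      print("amplitude near t =", steps * dt, "s:", max(abs(s) for s in x_hist[-lag:]))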

  5. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...

  6. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...

  7. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE OIL-SPILL RESPONSE REQUIREMENTS FOR FACILITIES LOCATED SEAWARD OF THE COAST LINE Oil-Spill Response Plans for Outer Continental Shelf Facilities § 254.26... the facility that oil could move in a time period that it reasonably could be expected to persist in...

  8. 40 CFR 57.405 - Formulation, approval, and implementation of requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... study shall be submitted after the end of the worst case three-month period as a part of the next semi... study demonstrating that the SCS will prevent violations of the NAAQS in the smelter's DLA at all times. The reliability study shall include a comprehensive analysis of the system's operation during one or...

  9. Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. 
L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. 
A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-08-01

    We report on a comprehensive all-sky search for periodic gravitational waves in the frequency band 100-1500 Hz and with a frequency time derivative in the range of [-1.18, +1.00] × 10^-8 Hz/s. Such a signal could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our galaxy. This search uses the data from the initial LIGO sixth science run and covers a larger parameter space than any past search. A Loosely Coherent detection pipeline was applied to follow up weak outliers in both Gaussian (95% recovery rate) and non-Gaussian (75% recovery rate) bands. No gravitational wave signals were observed, and upper limits were placed on their strength. Our smallest upper limit on the worst-case (linearly polarized) strain amplitude h0 is 9.7 × 10^-25 near 169 Hz, while at the high end of our frequency range we achieve a worst-case upper limit of 5.5 × 10^-24. Both cases refer to all sky locations and the entire range of frequency derivative values.

  10. Local measles vaccination gaps in Germany and the role of vaccination providers.

    PubMed

    Eichner, Linda; Wjst, Stephanie; Brockmann, Stefan O; Wolfers, Kerstin; Eichner, Martin

    2017-08-14

    Measles elimination in Europe is an urgent public health goal, yet despite the efforts of its member states, vaccination gaps and outbreaks occur. This study explores local vaccination heterogeneity in kindergartens and municipalities of a German county. Data on children from mandatory school enrolment examinations in 2014/15 in Reutlingen county were used. Children with unknown vaccination status were either removed from the analysis (best case) or assumed to be unvaccinated (worst case). Vaccination data were translated into expected outbreak probabilities. Physicians and kindergartens with statistically outstanding numbers of under-vaccinated children were identified. A total of 170 (7.1%) of 2388 children did not provide a vaccination certificate; 88.3% (worst case) or 95.1% (best case) were vaccinated at least once against measles. Based on the worst case vaccination coverage, <10% of municipalities and <20% of kindergartens were sufficiently vaccinated to be protected against outbreaks. Excluding children without a vaccination certificate (best case) leads to over-optimistic views: the overall outbreak probability in case of a measles introduction lies between 39.5% (best case) and 73.0% (worst case). Four paediatricians were identified who accounted for 41 of 109 unvaccinated children and for 47 of 138 incomplete vaccinations; GPs showed significantly higher rates of missing vaccination certificates and unvaccinated or under-vaccinated children than paediatricians. Missing vaccination certificates pose a severe problem regarding the interpretability of vaccination data. Although the coverage for at least one measles vaccination is higher in the studied county than in most South German counties and higher than the European average, many severe and potentially dangerous vaccination gaps occur locally. If other federal German states and EU countries show similar vaccination variability, measles elimination may not succeed in Europe.
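
    The abstract's translation of coverage into outbreak probabilities rests on a model not reproduced here; the sketch below shows one common, much simpler way to map vaccination coverage to the probability of a large outbreak after a single introduction, using a branching-process approximation with homogeneous mixing. The R0 value and the geometric offspring distribution are assumptions of the example, so its numbers will not match the study's.

    ```python
    # Illustrative only: one simple way to turn vaccination coverage into an
    # outbreak probability after a single measles introduction, using a
    # branching-process approximation with homogeneous mixing. This is NOT
    # the study's model; R0 is an assumption (a commonly cited magnitude).
    R0 = 15.0

    def outbreak_probability(coverage, r0=R0):
        """P(large outbreak | one introduction) for geometric offspring.

        With susceptible fraction s = 1 - coverage, the effective
        reproduction number is R_eff = r0 * s; a major outbreak is only
        possible when R_eff > 1, with probability 1 - 1/R_eff.
        """
        r_eff = r0 * (1.0 - coverage)
        return 1.0 - 1.0 / r_eff if r_eff > 1.0 else 0.0

    for cov in (0.883, 0.951):   # worst-case and best-case coverage from the study
        print(f"coverage {cov:.1%} -> outbreak probability "
              f"{outbreak_probability(cov):.1%}")
    ```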

  11. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    PubMed

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a preprocessing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing pass can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
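
    For reference, a minimal NumPy version of the scaled forward algorithm is sketched below, making the O(T·N²) cost the abstract cites explicit; this is a plain textbook implementation, not zipHMM's API or its substring-reuse optimization.

    ```python
    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """Scaled forward algorithm for a discrete-emission HMM.

        pi : (N,) initial state probabilities
        A  : (N, N) transition matrix, A[i, j] = P(state j | state i)
        B  : (N, M) emission matrix,   B[i, o] = P(symbol o | state i)
        obs: (T,) observation indices

        Each of the T steps costs O(N^2) for the matrix-vector product,
        giving the O(T * N^2) worst case the abstract refers to.
        """
        alpha = pi * B[:, obs[0]]
        log_like = np.log(alpha.sum())
        alpha /= alpha.sum()                 # rescale to avoid underflow
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]    # O(N^2) per step
            s = alpha.sum()
            log_like += np.log(s)
            alpha /= s
        return log_like

    # Tiny usage example with made-up parameters.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.9, 0.1], [0.2, 0.8]])
    B = np.array([[0.7, 0.3], [0.1, 0.9]])
    print(forward_log_likelihood(pi, A, B, np.array([0, 1, 1, 0])))
    ```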

  12. Worst case estimation of homology design by convex analysis

    NASA Technical Reports Server (NTRS)

    Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.

    1998-01-01

    The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of the loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
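
    The Lagrange-multiplier search described above has a closed form when the error index is linearized in the force fluctuations. A minimal sketch under that assumption follows; the weight matrix, sensitivity vector, and radius are illustrative, not values from the paper.

    ```python
    import numpy as np

    # Convex-model sketch: the uncertain nodal-force fluctuation f is
    # confined to a hyperellipse f^T W f <= rho^2, and we seek the point
    # on its boundary maximizing a linearized error index e = a^T f
    # (a = sensitivity of the homology error to each nodal force). The
    # stationarity condition a = 2*lambda*W f yields a closed form.
    W = np.diag([1.0, 4.0, 2.0])      # shape of the convex hull (assumed)
    a = np.array([0.5, -1.0, 0.25])   # error-index sensitivities (assumed)
    rho = 1.0                         # size of the uncertainty set (assumed)

    Winv_a = np.linalg.solve(W, a)
    f_worst = rho * Winv_a / np.sqrt(a @ Winv_a)

    print("worst-case force fluctuation:", f_worst)
    print("worst-case error index:", a @ f_worst)   # = rho*sqrt(a^T W^-1 a)
    print("on boundary:", np.isclose(f_worst @ W @ f_worst, rho**2))
    ```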

  13. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies.

    PubMed

    Davis, Michael J; Janke, Robert

    2018-01-04

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.

  14. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies

    NASA Astrophysics Data System (ADS)

    Davis, Michael J.; Janke, Robert

    2018-05-01

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
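
    As a rough sketch of the sensor-placement step both records describe (not the authors' optimization), the snippet below greedily selects sensors to reduce the worst-case (maximum over contamination events) impact, given a precomputed event-by-sensor impact matrix. The matrix here is random placeholder data; a real study would derive it from hydraulic and water-quality simulations of the network model.

    ```python
    import numpy as np

    # impact[e, s] = consequence (e.g., people exposed) of event e if
    # sensor s is the first to detect it; undetected events fall back to
    # a large penalty. The greedy loop adds, at each step, the sensor
    # that most reduces the worst-case impact over all events.
    rng = np.random.default_rng(0)
    n_events, n_nodes, budget = 200, 50, 5
    impact = rng.uniform(100, 10000, size=(n_events, n_nodes))
    no_detect = 20000.0                   # penalty when nothing detects

    chosen = []
    current = np.full(n_events, no_detect)
    for _ in range(budget):
        best_node, best_worst = None, None
        for s in range(n_nodes):
            if s in chosen:
                continue
            worst = np.minimum(current, impact[:, s]).max()
            if best_worst is None or worst < best_worst:
                best_node, best_worst = s, worst
        chosen.append(best_node)
        current = np.minimum(current, impact[:, best_node])

    print("sensor nodes:", chosen, "worst-case impact:", current.max())
    ```

    Greedy selection carries no optimality guarantee for min-max objectives; it is used here only to make the design loop concrete.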

  15. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications system with high speed downlink packet access (UMTS-HSDPA) signals is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99%) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates the realistic maximal exposure by up to 5.7 dB for the considered example. The values are highest in France because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those on weekend days.
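
    A hedged sketch of the extrapolation idea for GSM follows: because the BCCH carrier is transmitted at constant power, a worst-case field estimate can scale the measured BCCH field by the square root of the number of transceivers, assuming all transmit at full power (field scales with the square root of power). The "realistic" traffic factor below is purely hypothetical, standing in for the 99% factor derived from measured traffic in the paper; all numbers are assumptions.

    ```python
    import math

    E_bcch = 0.5    # measured BCCH field, V/m (assumed)
    n_trx = 4       # transceivers on the base station (assumed)

    # Theoretical worst case: all n_trx carriers at full power.
    E_worst = E_bcch * math.sqrt(n_trx)

    # Hypothetical realistic estimate with a partial traffic load.
    E_real = E_bcch * math.sqrt(1 + 0.6 * (n_trx - 1))

    overestimate_db = 20 * math.log10(E_worst / E_real)  # field ratio in dB
    print(f"worst case {E_worst:.2f} V/m, realistic {E_real:.2f} V/m, "
          f"difference {overestimate_db:.1f} dB")
    ```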

  16. Full band all-sky search for periodic gravitational waves in the O1 LIGO data

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allen, G.; Allocca, A.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Angelova, S. V.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Atallah, D. V.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Austin, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barkett, K.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bawaj, M.; Bayley, J. C.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Bero, J. J.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Biscoveanu, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonilla, E.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bossie, K.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderón; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerdá-Durán, P.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chase, E.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, X.; Chen, Y.; Cheng, H.-P.; Chia, H. Y.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciecielag, P.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Clearwater, P.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Cohen, D.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Cordero-Carrión, I.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, E. T.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Dálya, G.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davis, D.; Daw, E. 
J.; Day, B.; De, S.; DeBra, D.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Demos, N.; Denker, T.; Dent, T.; De Pietri, R.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; De Rossi, C.; DeSalvo, R.; de Varona, O.; Devenson, J.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorosh, O.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Dreissigacker, C.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dupej, P.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Estevez, D.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fee, C.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Finstad, D.; Fiori, I.; Fiorucci, D.; Fishbach, M.; Fisher, R. P.; Fitz-Axen, M.; Flaminio, R.; Fletcher, M.; Fong, H.; Font, J. A.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garcia-Quiros, C.; Garufi, F.; Gateley, B.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; Goncharov, B.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Gretarsson, E. M.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Halim, O.; Hall, B. R.; Hall, E. D.; Hamilton, E. Z.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinderer, T.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hreibi, A.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kamai, B.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, K.; Kim, W.; Kim, W. 
S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kinley-Hanlon, M.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Knowles, T. D.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Linker, S. D.; Littenberg, T. B.; Liu, J.; Lo, R. K. L.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macas, R.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Markowitz, A.; Maros, E.; Marquina, A.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Mason, K.; Massera, E.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McNeill, L.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, B. B.; Miller, J.; Millhouse, M.; Milovich-Goff, M. C.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moffa, D.; Moggi, A.; Mogushi, K.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muñiz, E. A.; Muratore, M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Neilson, J.; Nelemans, G.; Nelson, T. J. N.; Nery, M.; Neunzert, A.; Nevin, L.; Newport, J. M.; Newton, G.; Ng, K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; North, C.; Nuttall, L. K.; Oberling, J.; O'Dea, G. D.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Okada, M. A.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ossokine, S.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, Howard; Pan, Huang-Wei; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Parida, A.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patil, M.; Patricelli, B.; Pearlstone, B. 
L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pirello, M.; Pisarski, A.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Pratten, G.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rajbhandari, B.; Rakhmanov, M.; Ramirez, K. E.; Ramos-Buades, A.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Ren, W.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Rutins, G.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sanchez, L. E.; Sanchis-Gual, N.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheel, M.; Scheuer, J.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shaner, M. B.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, L. P.; Singh, A.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Somala, S.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staats, K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stevenson, S. P.; Stone, R.; Stops, D. J.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Strunk, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Suresh, J.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Tait, S. C.; Talbot, C.; Talukder, D.; Tanner, D. B.; Tao, D.; Tápai, M.; Taracchini, A.; Tasson, J. D.; Taylor, J. A.; Taylor, R.; Tewari, S. V.; Theeg, T.; Thies, F.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torres-Forné, A.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tsukada, L.; Tsuna, D.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. 
J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, W. H.; Wang, Y. F.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westerweck, J.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Wilken, D.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, W. K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wysocki, D. M.; Xiao, S.; Yamamoto, H.; Yancey, C. C.; Yang, L.; Yap, M. J.; Yazback, M.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadroźny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2018-05-01

    We report on a new all-sky search for periodic gravitational waves in the frequency band 475-2000 Hz and with a frequency time derivative in the range of [-1.0, +0.1] × 10^-8 Hz/s. Potential signals could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our Galaxy. This search uses the data from Advanced LIGO's first observational run O1. No gravitational-wave signals were observed, and upper limits were placed on their strengths. For completeness, results from the separately published low-frequency search 20-475 Hz are included as well. Our lowest upper limit on worst-case (linearly polarized) strain amplitude h0 is ~4 × 10^-25 near 170 Hz, while at the high end of our frequency range, we achieve a worst-case upper limit of 1.3 × 10^-24. For a circularly polarized source (most favorable orientation), the smallest upper limit obtained is ~1.5 × 10^-25.

  17. Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift

    NASA Astrophysics Data System (ADS)

    Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.

    2012-01-01

    This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
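
    As a minimal illustration of the wheel-lift criterion (a static simplification, not the paper's frequency-domain ZMP analysis), the sketch below flags lift when the laterally shifted ZMP of a rigid vehicle model leaves the support polygon; all parameters are invented for the example.

    ```python
    import numpy as np

    # Rigid vehicle model with no suspension dynamics: under lateral
    # acceleration a_y the zero-moment point shifts laterally by roughly
    # y_zmp = h * a_y / g from the centerline (h = CG height). Untripped
    # wheel lift is flagged when the ZMP exits the support polygon,
    # i.e. |y_zmp| > t/2 for track width t.
    g = 9.81
    h = 0.75          # CG height, m (assumed)
    track = 1.55      # track width, m (assumed)

    def wheel_lift(a_y):
        """True when the simplified ZMP exits the support polygon."""
        y_zmp = h * a_y / g
        return abs(y_zmp) > track / 2.0

    for a_y in np.arange(6.0, 12.0, 1.0):
        print(f"a_y = {a_y:4.1f} m/s^2 -> wheel lift: {wheel_lift(a_y)}")
    ```

    In this static limit the lift threshold reduces to a_y = g·t/(2h), the familiar static stability factor; the paper's contribution is extending the ZMP check to dynamic, frequency-dependent manoeuvres.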

  18. Mad cows and computer models: the U.S. response to BSE.

    PubMed

    Ackerman, Frank; Johnecheck, Wendy A

    2008-01-01

    The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.
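
    The compounding effect criticized above can be illustrated with a toy Monte Carlo (entirely synthetic, not the USDA model): letting each of 17 parameters take a worst-case value independently shows how quickly joint outcomes exceed any one-at-a-time worst case.

    ```python
    import numpy as np

    # Toy illustration only. A made-up "spread rate" is a product of
    # per-parameter multipliers; we compare the one-at-a-time worst case
    # with joint Monte Carlo sampling. All numbers are invented.
    rng = np.random.default_rng(1)
    k = 17
    base, worst = 1.0, 1.15          # per-parameter multipliers (assumed)

    # One-at-a-time: exactly one parameter at its worst value.
    r_one_at_a_time = worst * base ** (k - 1)

    # Joint sampling: each parameter worst with 25% probability.
    samples = rng.random((100_000, k)) < 0.25
    r_joint = worst ** samples.sum(axis=1)

    print("one-at-a-time spread rate:", r_one_at_a_time)
    print("P(joint rate exceeds it): ", (r_joint > r_one_at_a_time).mean())
    ```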

  19. A Risk-Based Approach to Variable Load Configuration Validation in Steam Sterilization: Application of PDA Technical Report 1 Load Equivalence Topic.

    PubMed

    Pavell, Anthony; Hughes, Keith A

    2010-01-01

    This article describes a method for achieving the load equivalence model described in Parenteral Drug Association Technical Report 1 using a mass-based approach. The item and load bracketing approach allows for mixed equipment load size variation for operational flexibility, along with decreased time to introduce new items to the operation. The article discusses the utilization of approximately 67 items/components (Table IV) identified for routine sterilization with varying quantities required weekly. The items were assessed for worst-case identification using four temperature-related criteria. The criteria were used to provide a data-based identification of worst-case items, and/or item equivalence, to carry forward into cycle validation using a variable load pattern. The mass approach to maximum load determination was used to bracket routine production use and allows for variable loading patterns. The result of the item mapping and load bracketing data is "a proven acceptable range" of sterilizing conditions, including loading configuration and location. While initially more time- and test-intensive than alternate approaches, the application of these approaches provides a method of cycle validation with long-term benefits: ease of ongoing qualification, reduced time and requirements for qualifying new equipment for similar loads/use, and rapid and rigorous assessment of new items for sterilization.
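
    Temperature-related worst-case assessment of this kind typically rests on accumulated lethality. Below is a minimal sketch of the standard F0 calculation (z = 10 °C, reference temperature 121.1 °C), applied to two synthetic heat-up traces to show how a slow-heating item emerges as the worst-case candidate; the traces are placeholders, not data from the article.

    ```python
    import numpy as np

    # F0 = sum(dt * 10**((T - 121.1) / z)) with z = 10 C for moist heat.
    z, T_ref = 10.0, 121.1

    def f0(temps_c, dt_min):
        """Accumulated lethality (minutes) of a temperature-time trace."""
        return float(np.sum(dt_min * 10.0 ** ((np.asarray(temps_c) - T_ref) / z)))

    dt = 0.5                                      # sampling interval, minutes
    t = np.arange(0, 60, dt)
    fast_item = np.minimum(121.5, 90 + 2.0 * t)   # heats quickly
    slow_item = np.minimum(121.5, 90 + 0.9 * t)   # lags: worst-case candidate

    for name, trace in [("fast item", fast_item), ("slow item", slow_item)]:
        print(f"{name}: F0 = {f0(trace, dt):.1f} min")
    ```

    The item with the lowest accumulated F0 over the cycle is the natural worst-case bracket for validation.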

  20. Cyber-security Considerations for Real-Time Physiological Status Monitoring: Threats, Goals, and Use Cases

    DTIC Science & Technology

    2016-11-01

    low-power RF transmissions used by the OBAN system. B. Threat Analysis Methodology To analyze the risk presented by a particular threat we use a... power efficiency and in the absolute worst case a compromise of the wireless channel could result in death. Fitness trackers on the other hand are...analysis is intended to inform the development of secure RT-PSM architectures. I. INTRODUCTION The development of very low-power computing devices and

  1. Considering the worst-case metabolic scenario, but training to the typical-case competitive scenario: response to Amtmann (2012).

    PubMed

    Del Vecchio, Fabrício Boscolo; Franchini, Emerson

    2013-08-01

    This response to Amtmann's letter emphasizes that knowledge of the typical time structure, as well as its variation, together with the main goal of mixed martial arts athletes--to win by knockout or submission--needs to be properly considered during training sessions. Examples from other combat sports are given and discussed, especially concerning the importance of adapting the physical conditioning workouts to the technical-tactical profile of the athlete and not the opposite.

  2. The reduction of a ""safety catastrophic'' potential hazard: A case history

    NASA Technical Reports Server (NTRS)

    Jones, J. P.

    1971-01-01

    A worst case analysis is reported on the safety of timing watch movements used for triggering explosive packages on the lunar surface in an experiment to investigate physical lunar structural characteristics through induced seismic energy waves. Considered are the combined effects of low pressure, low temperature, lunar gravity, gear train error, and position. Control measures comprise a sealed control cavity and design requirements to prevent overbanking in the mainspring torque curve. Thus, the potential hazard is reduced to a negligible safety risk.

  3. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
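
    The type of comparison at stake can be summarized in two displayed bounds (notation chosen here for illustration, not quoted from the paper): the Maurey-Jones-Barron bound gives variable-basis approximation a dimension-independent O(n^(-1/2)) worst-case rate over the convex hull of a bounded dictionary, while any linear approximator with an n-dimensional range has worst-case error bounded below by the Kolmogorov width of the function class.

    ```latex
    % Sketch of the two bound types, in a Hilbert space H with dictionary
    % G satisfying sup_{g in G} ||g|| <= b; span_n(G) denotes n-term
    % linear combinations of elements of G, L_n any linear approximator
    % with n-dimensional range, and d_n(F) the Kolmogorov n-width of F.
    \[
      \sup_{f \in \overline{\mathrm{conv}}(G)}\;
        \inf_{f_n \in \mathrm{span}_n(G)} \|f - f_n\|
        \le \frac{\sqrt{\,b^2 - \|f\|^2\,}}{\sqrt{n}},
      \qquad
      \sup_{f \in F} \|f - L_n f\| \ge d_n(F).
    \]
    ```

    For some classes F the width d_n(F) decays far more slowly in high input dimension than n^(-1/2), which is the sense in which variable-basis models can outperform any linear approximator.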

  4. Modelling the Growth of Swine Flu

    ERIC Educational Resources Information Center

    Thomson, Ian

    2010-01-01

    The spread of swine flu has been a cause of great concern globally. With no vaccine developed as yet (at the time of writing, July 2009), and given the fact that modern-day humans can travel speedily across the world, there are fears that this disease may spread out of control. The worst-case scenario would be one of unfettered exponential growth.…
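
    A small worked example of such unfettered growth (with invented numbers): doubling every week, 100 initial cases pass 25,000 within eight weeks.

    ```python
    # cases(t) = cases(0) * 2**(t / T_d) for doubling time T_d.
    # The numbers are illustrative, not epidemiological data.
    initial_cases = 100
    doubling_time_days = 7.0

    for day in (0, 14, 28, 56):
        cases = initial_cases * 2 ** (day / doubling_time_days)
        print(f"day {day:2d}: ~{cases:,.0f} cases")
    ```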

  5. New Algorithms and Lower Bounds for Sequential-Access Data Compression

    NASA Astrophysics Data System (ADS)

    Gagie, Travis

    2009-02-01

    This thesis concerns sequential-access data compression, i.e., by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows us passes and memory both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.

  6. Probabilistic Models for Solar Particle Events

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Dietrich, W. F.; Xapsos, M. A.; Welton, A. M.

    2009-01-01

    Probabilistic models of Solar Particle Events (SPEs) are used in space mission design studies to provide a description of the worst-case radiation environment that the mission must be designed to tolerate. The models determine the worst-case environment using a description of the mission and a user-specified confidence level that the provided environment will not be exceeded. This poster will focus on completing the existing suite of models by developing models for peak flux and event-integrated fluence elemental spectra for the Z>2 elements. It will also discuss methods to take into account uncertainties in the database and the uncertainties resulting from the limited number of solar particle events in the database. These new probabilistic models are based on an extensive survey of SPE measurements of peak and event-integrated elemental differential energy spectra. Attempts are made to fit the measured spectra with eight different published models. The model giving the best fit to each spectrum is chosen and used to represent that spectrum for any energy in the energy range covered by the measurements. The set of all such spectral representations for each element is then used to determine the worst-case spectrum as a function of confidence level. The spectral representation that best fits these worst-case spectra is found and its dependence on confidence level is parameterized. This procedure creates probabilistic models for the peak and event-integrated spectra.
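
    The final step described above, turning an ensemble of fitted spectra into a worst-case spectrum at a given confidence level, can be sketched as a per-energy percentile over the ensemble; the power-law ensemble below is synthetic, standing in for the fitted SPE spectra.

    ```python
    import numpy as np

    # Synthetic ensemble of event spectra on a common energy grid:
    # power laws with random amplitude and index (placeholders only).
    rng = np.random.default_rng(2)
    energies = np.logspace(1, 3, 20)        # MeV, illustrative grid

    amps = rng.lognormal(mean=8.0, sigma=1.0, size=300)
    idxs = rng.uniform(2.0, 3.5, size=300)
    spectra = amps[:, None] * energies[None, :] ** (-idxs[:, None])

    # Worst-case spectrum at confidence level c: at each energy, the
    # flux exceeded only with probability (1 - c) across events.
    for conf in (0.90, 0.95, 0.99):
        worst = np.percentile(spectra, 100 * conf, axis=0)
        print(f"{conf:.0%} worst-case flux at {energies[0]:.0f} MeV: "
              f"{worst[0]:.3g}")
    ```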

  7. A Multidimensional Assessment of Children in Conflictual Contexts: The Case of Kenya

    ERIC Educational Resources Information Center

    Okech, Jane E. Atieno

    2012-01-01

    Children in Kenya's Kisumu District Primary Schools (N = 430) completed three measures of trauma. Respondents completed the "My Worst Experience Scale" (MWES; Hyman and Snook 2002) and its supplement, the "School Alienation and Trauma Survey" (SATS; Hyman and Snook 2002), sharing their worst experiences overall and specifically…

  8. Worst theft losses are for Mercedes model; 2 of 3 worst are for acura

    DOT National Transportation Integrated Search

    2000-05-13

    The Highway Loss Data Institute's annual list of vehicles with worst theft losses has Mercedes S class heading the list of passenger vehicles with the highest insurance losses for theft. Overall losses for this car are 10 times higher than the averag...

  9. DVD-COOP: Innovative Conjunction Prediction Using Voronoi-filter based on the Dynamic Voronoi Diagram of 3D Spheres

    NASA Astrophysics Data System (ADS)

    Cha, J.; Ryu, J.; Lee, M.; Song, C.; Cho, Y.; Schumacher, P.; Mah, M.; Kim, D.

    Conjunction prediction is one of the critical operations in space situational awareness (SSA). For geospace objects, common algorithms for conjunction prediction are usually based on all-pairwise checks, spatial hashing, or kd-trees. Computational load is usually reduced through some filters; however, there is then a good chance of missing potential collisions between space objects. We present a novel algorithm which both guarantees no missed conjunctions and is efficient in answering a variety of spatial queries, including pairwise conjunction prediction. The algorithm takes only O(k log N) time for N objects in the worst case to report conjunctions, where k is a constant linear in the prediction time length. The proposed algorithm, named DVD-COOP (Dynamic Voronoi Diagram-based Conjunctive Orbital Object Predictor), is based on the dynamic Voronoi diagram of moving spherical balls in 3D space. The algorithm has a preprocessing phase consisting of two steps: the construction of an initial Voronoi diagram (taking O(N) time on average) and the construction of a priority queue for the events of topology changes in the Voronoi diagram (taking O(N log N) time in the worst case). The scalability of the proposed algorithm is also discussed. We hope that the proposed Voronoi approach will change the computational paradigm in spatial reasoning among space objects.
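
    The event-queue side of this design can be sketched independently of the Voronoi machinery (which generates the topology-change events and is well beyond a snippet): events keyed by time go into a binary heap, so reporting the next k of them costs O(k log N). The event list below is a placeholder, not output from DVD-COOP.

    ```python
    import heapq

    # Placeholder conjunction-candidate events: (time [s], object_a, object_b).
    events = [
        (420.0,  "SAT-17", "DEB-203"),
        (95.5,   "SAT-03", "SAT-44"),
        (1310.2, "DEB-77", "SAT-03"),
        (95.5,   "SAT-21", "DEB-11"),
    ]

    queue = []
    for t, a, b in events:
        heapq.heappush(queue, (t, a, b))    # O(log N) per insertion

    while queue:
        t, a, b = heapq.heappop(queue)      # O(log N) per event, in time order
        print(f"t = {t:7.1f} s: check conjunction {a} <-> {b}")
    ```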

  10. Worst-error analysis of batch filter and sequential filter in navigation problems. [in spacecraft trajectory estimation

    NASA Technical Reports Server (NTRS)

    Nishimura, T.

    1975-01-01

    This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes result in very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on the incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also the worst errors in the target plane provide a measure in assignment of the propellant budget for trajectory corrections. Thus the worst-error study presents useful information as well as practical criteria in establishing the maneuver and tracking strategy of spacecraft's missions.

  11. Aircraft Loss-of-Control Accident Analysis

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Foster, John V.

    2010-01-01

    Loss of control remains one of the largest contributors to fatal aircraft accidents worldwide. Aircraft loss-of-control accidents are complex in that they can result from numerous causal and contributing factors acting alone or (more often) in combination. Hence, there is no single intervention strategy to prevent these accidents. To gain a better understanding into aircraft loss-of-control events and possible intervention strategies, this paper presents a detailed analysis of loss-of-control accident data (predominantly from Part 121), including worst case combinations of causal and contributing factors and their sequencing. Future potential risks are also considered.

  12. Experimental measurement of preferences in health and healthcare using best-worst scaling: an overview.

    PubMed

    Mühlbacher, Axel C; Kaczynski, Anika; Zweifel, Peter; Johnson, F Reed

    2016-12-01

    Best-worst scaling (BWS), also known as maximum-difference scaling, is a multiattribute approach to measuring preferences. BWS analyzes preferences over a set of attributes, their levels, or alternatives. It is a stated-preference method based on the assumption that respondents are capable of judging the best and the worst (or the most and least important, respectively) out of three or more elements of a choice set. As with discrete choice experiments (DCE) generally, BWS avoids the known weaknesses of rating and ranking scales while promising additional information by making respondents choose twice, identifying both the best and the worst criteria. A systematic literature review found 53 BWS applications in health and healthcare. This article expounds possibilities of application, the underlying theoretical concepts, and the implementation of BWS in its three variants: 'object case', 'profile case', and 'multiprofile case'. The paper surveys BWS methods with attention to study design, experimental design, and data analysis. It also discusses the strengths and weaknesses of the three types of BWS and offers an outlook. A companion paper focuses on special issues of theory and statistical inference confronting BWS in preference measurement.

  13. Integration of models of various types of aquifers for water quality management in the transboundary area of the Soča/Isonzo river basin (Slovenia/Italy).

    PubMed

    Vižintin, Goran; Ravbar, Nataša; Janež, Jože; Koren, Eva; Janež, Naško; Zini, Luca; Treu, Francesco; Petrič, Metka

    2018-04-01

    Owing to the intrinsic characteristics of aquifers, groundwater frequently passes between aquifers of various types without hindrance. This complex connection of underground water paths enables flow regardless of administrative boundaries, which can cause problems in water resources management. Numerical modelling is an important tool for the understanding, interpretation and management of aquifers. Useful and reliable numerical modelling methods differ with regard to the type of aquifer, but their connection in a single hydrodynamic model is rare. The purpose of this study was to connect different models into an integrated system that enables determination of water travel time from the point of contamination to water sources, with the worst-case scenario considered. The system was applied in the basin of the Soča/Isonzo, a transboundary river in Slovenia and Italy, where there is a complex contact of karst and intergranular aquifers and surface flows over bedrock with low permeability. Time cell models were first elaborated separately for individual hydrogeological units. These were the result of numerical hydrological modelling (intergranular aquifer and surface flow) or complex GIS analysis taking into account the vulnerability map and tracer test results (karst aquifer). The resulting cellular models form the basis of a contamination early-warning system, since they allow an estimate of when, and in which water sources, contaminants can be expected to appear. The system shows that contaminants spread rapidly through karst aquifers and via surface flows, and more slowly through intergranular aquifers. For this reason, karst water sources are more at risk from one-off contamination incidents, while water sources in intergranular aquifers are more at risk in cases of long-term contamination. The system that has been developed is the basis for a single system of protection, action and quality monitoring in areas of complex aquifer systems within or on the borders of administrative units.

  14. Effect of Impact Location on the Response of Shuttle Wing Leading Edge Panel 9

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Spellman, Regina L.; Hardy, Robin C.; Fasanella, Edwin L.; Jackson, Karen E.

    2005-01-01

    The objective of this paper is to compare the results of several simulations performed to determine the worst-case location for a foam impact on the Space Shuttle wing leading edge. The simulations were performed using the commercial non-linear transient dynamic finite element code LS-DYNA. These simulations represent the first in a series of parametric studies performed to support the selection of the worst-case impact scenario. Panel 9 was selected for this study to enable comparisons with previous simulations performed during the Columbia Accident Investigation. The projectile for this study is a 5.5-in cube of typical external tank foam weighing 0.23 lb. Seven locations spanning the panel surface were impacted with the foam cube. For each of these cases, the foam was traveling at 1000 ft/s directly aft, along the orbiter X-axis. Results compared across the parametric studies included strains, contact forces, and material energies. The results show that the worst-case impact location was on the top surface, near the apex.

  15. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses

    PubMed Central

    Faith, Daniel P.

    2015-01-01

    The phylogenetic diversity measure ('PD') measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas. PMID:25561672

  16. Space Environment Effects: Model for Emission of Solar Protons (ESP): Cumulative and Worst Case Event Fluences

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, E. A.; Gee, G. B.

    1999-01-01

    The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.

  17. Space Environment Effects: Model for Emission of Solar Protons (ESP)--Cumulative and Worst-Case Event Fluences

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Barth, J. L.; Stassinopoulos, E. G.; Burke, Edward A.; Gee, G. B.

    1999-01-01

    The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary and polar orbits and for interplanetary missions. Designers of spacecraft and mission planners are required to assess the performance of microelectronic systems under a variety of conditions. A number of useful approaches exist for predicting information about solar proton event fluences and, to a lesser extent, peak fluxes. This includes the cumulative fluence over the course of a mission, the fluence of a worst-case event during a mission, the frequency distribution of event fluences, and the frequency distribution of large peak fluxes. Naval Research Laboratory (NRL) and NASA Goddard Space Flight Center, under the sponsorship of NASA's Space Environments and Effects (SEE) Program, have developed a new model for predicting cumulative solar proton fluences and worst-case solar proton events as functions of mission duration and user confidence level. This model is called the Emission of Solar Protons (ESP) model.

  18. For wind turbines in complex terrain, the devil is in the detail

    NASA Astrophysics Data System (ADS)

    Lange, Julia; Mann, Jakob; Berg, Jacob; Parvu, Dan; Kilpatrick, Ryan; Costache, Adrian; Chowdhury, Jubayer; Siddiqui, Kamran; Hangan, Horia

    2017-09-01

    The cost of energy produced by onshore wind turbines is among the lowest available; however, onshore wind turbines are often positioned in complex terrain, where the wind resources and wind conditions are quite uncertain due to the surrounding topography and/or vegetation. In this study, we use a scale model in a three-dimensional wind-testing chamber to show how minor changes in the terrain can result in significant differences in the flow at turbine height. These differences affect not only the power performance but also the lifetime and maintenance costs of wind turbines, and hence the economy and feasibility of wind turbine projects. We find that the mean wind, wind shear and turbulence level are extremely sensitive to the exact details of the terrain: a small modification of the edge of our scale model results in a reduction of the estimated annual energy production by at least 50% and an increase in the turbulence level by a factor of five in the worst-case scenario with the most unfavorable wind direction. Wind farm developers should be aware that destructive flows can occur near escarpments, that their extent is uncertain, and that on-site field measurements are therefore warranted.

  19. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n^4) in the worst case.

  20. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n^4) in the worst case.

  1. Physical and composition characteristics of clinical secretions compared with test soils used for validation of flexible endoscope cleaning.

    PubMed

    Alfa, M J; Olson, N

    2016-05-01

    To determine which simulated-use test soils met the worst-case organic levels and viscosity of clinical secretions, and which had the best adhesive characteristics, levels of protein, carbohydrate and haemoglobin, and vibrational viscosity of clinical endoscope secretions were compared with test soils including ATS, ATS2015, Edinburgh, Edinburgh-M (modified), Miles, 10% serum and coagulated whole blood. ASTM D3359 was used for adhesion testing. Cleaning of a single-channel flexible intubation endoscope was tested after simulated use. The worst-case levels of protein, carbohydrate and haemoglobin, and the viscosity of clinical material were 219,828 μg/mL, 9296 μg/mL, 9562 μg/mL and 6 cP, respectively. Whole blood, ATS2015 and Edinburgh-M were pipettable, with viscosities of 3.4 cP, 9.0 cP and 11.9 cP, respectively. ATS2015 and Edinburgh-M best matched the worst-case clinical parameters, but ATS had the best adhesion, with 7% removal (36.7% for Edinburgh-M). Edinburgh-M and ATS2015 showed similar soiling and removal characteristics on the surface and lumen of a flexible intubation endoscope. Of the test soils evaluated, ATS2015 and Edinburgh-M were found to be good choices for the simulated use of endoscopes, as their composition and viscosity most closely matched worst-case clinical material.

  2. Level II scour analysis for Bridge 38 (CONCTH00060038) on Town Highway 6, crossing the Moose River, Concord, Vermont

    USGS Publications Warehouse

    Olson, Scott A.

    1996-01-01

    Contraction scour for all modelled flows ranged from 0.1 to 3.1 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour at the left abutment ranged from 10.4 to 12.5 ft with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 25.3 to 27.3 ft with the worst-case occurring at the incipient-overtopping discharge. The worst-case total scour also occurred at the incipient-overtopping discharge. The incipient-overtopping discharge was in between the 100- and 500-year discharges. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  3. Scoring best-worst data in unbalanced many-item designs, with applications to crowdsourcing semantic judgments.

    PubMed

    Hollis, Geoff

    2018-04-01

    Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of scoring algorithms have been devised in the previous literature on best-worst scaling. However, owing to their computational cost, these scoring algorithms cannot be applied to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations are identified, relating to noise, the underlying distribution of true values, and trial design, that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
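
    The simplest scoring rule for best-worst data, the best-minus-worst count normalized by appearances, is trivially cheap even for many-item designs and is a useful baseline for the more refined algorithms the article introduces. A minimal sketch, assuming trials are recorded as (items_shown, best, worst) triples:

    ```python
    from collections import defaultdict

    def best_worst_scores(trials):
        """Best-minus-worst scores from best-worst scaling trials.

        trials: iterable of (items_shown, best, worst) triples.
        Returns {item: (times best - times worst) / times shown}."""
        best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
        for items, b, w in trials:
            for item in items:
                shown[item] += 1
            best[b] += 1
            worst[w] += 1
        return {item: (best[item] - worst[item]) / shown[item] for item in shown}

    # Example: three 4-item trials over a small vocabulary.
    trials = [
        (("calm", "happy", "angry", "bored"), "happy", "angry"),
        (("calm", "tense", "angry", "glad"), "glad", "angry"),
        (("happy", "tense", "bored", "glad"), "happy", "tense"),
    ]
    print(best_worst_scores(trials))  # 'happy' scores 1.0, 'angry' -1.0
    ```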

  4. A randomised controlled trial of three or one breathing technique training sessions for breathlessness in people with malignant lung disease.

    PubMed

    Johnson, Miriam J; Kanaan, Mona; Richardson, Gerry; Nabb, Samantha; Torgerson, David; English, Anne; Barton, Rachael; Booth, Sara

    2015-09-07

    About 90% of patients with intra-thoracic malignancy experience breathlessness. Breathing training is helpful, but it is unknown whether repeated sessions are needed. The present study aims to test whether three sessions are better than one for breathlessness in this population. This is a multi-centre, randomised, controlled, non-blinded, parallel-arm trial. Participants were allocated to three sessions or a single session (1:2 ratio) using central computer-generated block randomisation by an independent Trials Unit, stratified by centre. The setting was respiratory, oncology or palliative care clinics at eight UK centres. Inclusion criteria were people with intrathoracic cancer and refractory breathlessness, expected prognosis ≥3 months, and no prior experience of breathing training. The trial intervention was a complex breathlessness intervention (breathing training, anxiety management, relaxation, pacing, and prioritisation) delivered over three hour-long sessions at weekly intervals, or during a single hour-long session. The primary outcome was worst breathlessness over the previous 24 hours ('worst'), by numerical rating scale (0 = none; 10 = worst imaginable). Our primary analysis was area under the curve (AUC) of 'worst' from baseline to 4 weeks. All analyses were by intention to treat. Between April 2011 and October 2013, 156 consenting participants were randomised (52 to three sessions; 104 to a single session). Overall, the 'worst' score reduced from 6.81 (SD, 1.89) to 5.84 (2.39). The primary analysis [n = 124 (79%)] showed no between-arm difference in the AUC: three sessions 22.86 (7.12) vs single session 22.58 (7.10); P = 0.83; mean difference 0.2, 95% CI -2.31 to 2.97. Complete case analysis showed a non-significant reduction in QALYs with three sessions (mean difference -0.006, 95% CI -0.018 to 0.006). Sensitivity analyses found similar results. The probability of the single session being cost-effective (threshold value of £20,000 per QALY) was over 80%. There was no evidence that three sessions conferred additional benefits, including cost-effectiveness, over one. A single session of breathing training seems appropriate and minimises patient burden. Registry: ISRCTN; ISRCTN49387307; http://www.isrctn.com/ISRCTN49387307 ; registration date: 25/01/2011.

  5. Large-scale modeled contemporary and future water temperature estimates for 10774 Midwestern U.S. Lakes

    USGS Publications Warehouse

    Winslow, Luke A.; Hansen, Gretchen J. A.; Read, Jordan S.; Notaro, Michael

    2017-01-01

    Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979–2015) and future (2020–2040 and 2080–2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes.

  6. Large-scale modeled contemporary and future water temperature estimates for 10774 Midwestern U.S. Lakes

    PubMed Central

    Winslow, Luke A.; Hansen, Gretchen J.A.; Read, Jordan S; Notaro, Michael

    2017-01-01

    Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979–2015) and future (2020–2040 and 2080–2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes. PMID:28440790

  7. Advantages and Disadvantages of 1-Incision, 2-Incision, 3-Incision, and 4-Incision Laparoscopic Cholecystectomy: A Workflow Comparison Study.

    PubMed

    Bartnicka, Joanna; Zietkiewicz, Agnieszka A; Kowalski, Grzegorz J

    2016-08-01

    One-port, 2-port, 3-port, and 4-port laparoscopic cholecystectomy techniques were compared against workflow criteria both to identify specific workflow components that can cause surgical disturbances and to indicate good and bad practices. As a case study, laparoscopic cholecystectomies, including manual tasks and interactions among team members, were video-recorded and analyzed on the basis of specially encoded workflow information. The parameters for comparison were defined as follows: surgery time, tool and hand activeness, operator's passive work, collisions, and operator interventions. It was found that 1-port cholecystectomy is the worst technique because of nonergonomic body position, technical complexity, organizational anomalies, and operational dynamism. The differences between laparoscopic techniques are closely linked to the costs of the medical procedures. Hence, knowledge about the surgical workflow can be used both for planning surgical procedures and for balancing the expenses associated with surgery.

  8. Risk-Screening Environmental Indicators (RSEI)

    EPA Pesticide Factsheets

    EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed only to be compared with other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
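
    The multiplicative structure described here (release amount, fate-and-transport attenuation, relative toxicity weight, exposed population) reduces to a one-line screening score. The sketch below is a deliberately simplified caricature of that structure, not the actual RSEI model; the parameter names, the linear form, and the worst-case transport_factor=1.0 default are all illustrative assumptions.

    ```python
    def rsei_style_score(release_lbs, toxicity_weight, population,
                         transport_factor=1.0):
        """Screening-level score: released amount attenuated by a simplified
        fate-and-transport factor, weighted by relative toxicity and the
        number of people potentially exposed. The result is unitless and
        only comparable with other scores computed the same way."""
        surrogate_exposure = release_lbs * transport_factor
        return surrogate_exposure * toxicity_weight * population

    # Where transport data are lacking, a worst-case assumption of no
    # attenuation (transport_factor=1.0) keeps the screen conservative.
    print(rsei_style_score(1000.0, toxicity_weight=50.0, population=12000))
    ```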

  9. Large-scale modeled contemporary and future water temperature estimates for 10774 Midwestern U.S. Lakes

    NASA Astrophysics Data System (ADS)

    Winslow, Luke A.; Hansen, Gretchen J. A.; Read, Jordan S.; Notaro, Michael

    2017-04-01

    Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979-2015) and future (2020-2040 and 2080-2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes.

  10. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tock, Yoav; Mandler, Benjamin; Moreira, Jose

    2013-01-01

    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. The resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux, and using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.

  11. Thinking and Doing the Best Things in the Worst Times.

    ERIC Educational Resources Information Center

    Marty, Martin E.

    1994-01-01

    Several danger signals reflect our cultural disarray. Schools reflect larger societal breakdown and absence of common culture to support learning, discourse, conversation, or argument. In the "worst times," best educators preserve whatever transcends mere relativism, promote whatever survives of subcommunities that generate character,…

  12. Securing Sub-Saharan Africa’s Maritime Environment: Lessons Learned from the Caribbean and Southeast Asia

    DTIC Science & Technology

    2009-06-01

    Worst of Times: Maritime Security in the Asia-Pacific eds. Joshua Ho and Catherine Zara Raymond (Singapore: Institute of Defense and Strategic Studies...Security Outlook for Southeast Asia,” in The Best of Times, the Worst of Times: Maritime Security in the Asia-Pacific eds. Joshua Ho and Catherine Zara

  13. EPHECT I: European household survey on domestic use of consumer products and development of worst-case scenarios for daily use.

    PubMed

    Dimitroulopoulou, C; Lucica, E; Johnson, A; Ashmore, M R; Sakellaris, I; Stranger, M; Goelen, E

    2015-12-01

    Consumer products are frequently and regularly used in the domestic environment. Realistic estimates of product use are required for exposure modelling and health risk assessment. This paper provides significant data that can be used as input for such modelling studies. A European survey was conducted, within the framework of the DG Sanco-funded EPHECT project, on the household use of 15 consumer products: all-purpose cleaners, kitchen cleaners, floor cleaners, glass and window cleaners, bathroom cleaners, furniture and floor polish products, combustible air fresheners, spray air fresheners, electric air fresheners, passive air fresheners, coating products for leather and textiles, hair styling products, spray deodorants and perfumes. The analysis of the results from the household survey (first phase) focused on identifying consumer behaviour patterns (selection criteria, frequency of use, quantities, period of use and ventilation conditions during product use). This can provide valuable input to modelling studies, as this information is not reported in the open literature. These results were further analysed (second phase) to provide the basis for the development of 'most representative worst-case scenarios' regarding the use of the 15 products by home-based population groups (housekeepers and retired people) in four geographical regions of Europe. These scenarios will be used for the exposure and health risk assessment within the EPHECT project. To the best of our knowledge, this is the first time that daily worst-case scenarios for the use of such a wide range of consumer products across Europe have been presented in the published scientific literature.

  14. Ecological risk estimation of organophosphorus pesticides in riverine ecosystems.

    PubMed

    Wee, Sze Yee; Aris, Ahmad Zaharin

    2017-12-01

    Pesticides are of great concern because of their presence in ecosystems at trace concentrations. Worldwide pesticide use and its ecological impacts (i.e., altered environmental distribution and toxicity of pesticides) have increased over time. Exposure and toxicity studies are vital for reducing the extent of pesticide exposure and risk to the environment and humans. Regional regulatory actions may be less relevant in some regions because the contamination and distribution of pesticides vary across regions and countries. The risk quotient (RQ) method was applied to assess the potential risk of organophosphorus pesticides (OPPs), primarily focusing on riverine ecosystems. Using the available ecotoxicity data, aquatic risks from OPPs (diazinon and chlorpyrifos) in the surface water of the Langat River, Selangor, Malaysia were evaluated under general (RQm) and worst-case (RQex) scenarios. Since the ecotoxicity of quinalphos has not been well established, quinalphos was excluded from the risk assessment. The calculated RQs indicate medium risk (RQm = 0.17 and RQex = 0.66; 0.1 ≤ RQ < 1) for overall diazinon exposure. Overall chlorpyrifos exposure was assessed as high risk (RQ ≥ 1), with RQm and RQex at 1.44 and 4.83, respectively. RQs > 1 (high risk) were observed for both the general and worst cases of chlorpyrifos, but only for the worst cases of diazinon, at all sites from the downstream to the upstream regions. Thus, chlorpyrifos posed a higher risk than diazinon along the Langat River, suggesting that organisms and humans could be exposed to potentially high levels of OPPs.
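
    The risk quotient itself is a one-line calculation, an exposure concentration divided by a no-effect benchmark, with the bands used above (RQ < 0.1 low, 0.1 ≤ RQ < 1 medium, RQ ≥ 1 high) applied to the result. A minimal sketch, assuming the usual measured-concentration-over-PNEC formulation, where the mean concentration gives RQm (general case) and the maximum gives RQex (worst case); the concentrations below are illustrative, not measured values:

    ```python
    def risk_quotient(concentration, pnec):
        """RQ = environmental concentration / predicted no-effect concentration."""
        return concentration / pnec

    def risk_band(rq):
        if rq >= 1.0:
            return "high"
        if rq >= 0.1:
            return "medium"
        return "low"

    # Illustrative concentrations (ug/L) at four sampling sites.
    measured = [0.03, 0.05, 0.12, 0.08]
    pnec = 0.025
    rq_m = risk_quotient(sum(measured) / len(measured), pnec)  # general case
    rq_ex = risk_quotient(max(measured), pnec)                 # worst case
    print(risk_band(rq_m), risk_band(rq_ex))
    ```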

  15. Coherent detection of frequency-hopped quadrature modulations in the presence of jamming. I - QPSK and QASK modulations

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Polydoros, A.

    1981-01-01

    This paper examines the performance of coherent QPSK and QASK systems combined with FH or FH/PN spread spectrum techniques in the presence of partial-band multitone or noise jamming. The worst-case jammer and worst-case performance are determined as functions of the signal-to-background noise ratio (SNR) and signal-to-jammer power ratio (SJR). Asymptotic results for high SNR are shown to have a linear dependence between the jammer's optimal power allocation and the system error probability performance.

  16. Optimal Analyses for 3×n AB Games in the Worst Case

    NASA Astrophysics Data System (ADS)

    Huang, Li-Te; Lin, Shun-Shii

    The past decades have witnessed growing interest in research on deductive games such as Mastermind and AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. Traditional tree-search approaches are not appropriate for this problem, however, since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. This study therefore conducts intensive analyses of optimal worst-case play in 3×n AB games and develops a method, called structural reduction, that characterizes the worst situation in the game. Furthermore, a formula for the optimal number of guesses required for arbitrary values of n is derived and proven.

  17. The challenge posed to children's health by mixtures of toxic waste: the Tar Creek superfund site as a case-study.

    PubMed

    Hu, Howard; Shine, James; Wright, Robert O

    2007-02-01

    In the United States, many of the millions of tons of hazardous wastes that have been produced since World War II have accumulated in sites throughout the nation. Citizen concern about the extent of this problem led Congress to establish the Superfund Program in 1980 to locate, investigate, and clean up the worst sites nationwide. Most such waste exists as a complex mixture of many substances. This article discusses the issue of toxic mixtures and children's health by focusing on the specific example of mining waste at the Tar Creek Superfund Site in Northeast Oklahoma.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems that increase an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard to uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methods, taking into account small sample sizes, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results to make informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness at lower cost.
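
    The step from single worst-case task times to a delay distribution can be pictured with a plain Monte Carlo sketch: sample each task's duration from an uncertainty distribution and accumulate along the adversary path. The lognormal form and the parameters below are illustrative assumptions; the methodology described above additionally brings in Bayesian updating, expert judgment, and small-sample corrections that a sketch like this omits.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def path_delay_samples(tasks, n=100_000):
        """tasks: list of (median_seconds, sigma) per barrier/task on the
        path. Each duration is sampled lognormally and the path total is
        the sum of its task durations."""
        total = np.zeros(n)
        for median, sigma in tasks:
            total += rng.lognormal(mean=np.log(median), sigma=sigma, size=n)
        return total

    tasks = [(30, 0.4), (120, 0.6), (45, 0.3)]  # hypothetical barriers
    delays = path_delay_samples(tasks)
    # Risk-relevant summaries: how fast might a capable adversary plausibly be?
    print(np.percentile(delays, [5, 50, 95]))
    ```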

  19. "It Was the Best of Times, It Was the Worst of Times …": Philosophy of Education in the Contemporary World

    ERIC Educational Resources Information Center

    Roberts, Peter

    2015-01-01

    This article considers the state of philosophy of education in our current age and assesses prospects for the future of the field. I argue that as philosophers of education, we live in both the best of times and the worst of times. Developments in one key organisation, the Philosophy of Education Society of Australasia, are examined in relation to…

  20. How can health systems research reach the worst-off? A conceptual exploration.

    PubMed

    Pratt, Bridget; Hyder, Adnan A

    2016-11-15

    Health systems research is increasingly being conducted in low- and middle-income countries (LMICs). Such research should aim to reduce health disparities between and within countries as a matter of global justice. For such research to do so, ethical guidance consistent with egalitarian theories of social justice proposes that it ought to (amongst other things) focus on worst-off countries and research populations. Yet who constitutes the worst-off is not well defined. By applying existing work on disadvantage from political philosophy, the paper demonstrates that (at least) two options exist for how to define the worst-off upon whom equity-oriented health systems research should focus: those who are worst-off in terms of health, or those who are systematically disadvantaged. The paper describes in detail how both concepts can be understood and what metrics can be relied upon to identify worst-off countries and research populations at the sub-national level (groups, communities). To demonstrate how each can be used, the paper considers two real-world cases of health systems research and whether their choices of country (Uganda, India) and research population in 2011 would have been classified as amongst the worst-off according to the proposed concepts. The two proposed concepts can classify different countries and sub-national populations as worst-off. It is recommended that health researchers (or other actors) use the concept that best reflects their moral commitments, namely, to perform research focused on reducing health inequalities or systematic disadvantage more broadly. If addressing the latter, it is recommended that they rely on the multidimensional poverty approach rather than the income approach to identify worst-off populations.

  1. Phylogenetic diversity, functional trait diversity and extinction: avoiding tipping points and worst-case losses.

    PubMed

    Faith, Daniel P

    2015-02-19

    The phylogenetic diversity measure ('PD') measures the relative feature diversity of different subsets of taxa from a phylogeny. At the level of feature diversity, PD supports the broad goal of biodiversity conservation to maintain living variation and option values. PD calculations at the level of lineages and features include those integrating probabilities of extinction, providing estimates of expected PD. This approach has known advantages over the evolutionarily distinct and globally endangered (EDGE) methods. Expected PD methods also have limitations. An alternative notion of expected diversity, expected functional trait diversity, relies on an alternative non-phylogenetic model and allows inferences of diversity at the level of functional traits. Expected PD also faces challenges in helping to address phylogenetic tipping points and worst-case PD losses. Expected PD may not choose conservation options that best avoid worst-case losses of long branches from the tree of life. We can expand the range of useful calculations based on expected PD, including methods for identifying phylogenetic key biodiversity areas.

  2. Optimization of vibratory energy harvesters with stochastic parametric uncertainty: a new perspective

    NASA Astrophysics Data System (ADS)

    Haji Hosseinloo, Ashkan; Turitsyn, Konstantin

    2016-04-01

    Vibration energy harvesting has been shown to be a promising power source for many small-scale applications, mainly because of the considerable reduction in the energy consumption of modern electronics and the scalability issues of conventional batteries. However, energy harvesters may not be as robust as conventional batteries, and their performance can drastically deteriorate in the presence of uncertainty in their parameters. Hence, the study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While previous studies have focused on optimizing the expected power, we propose a new and more practical optimization perspective: optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and, as a simple example, apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power, and then optimize for it under different confidence levels. The results show a significant improvement in the worst-case power of a harvester designed this way compared with that of a naively (deterministically) optimized harvester.
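
    Worst-case (minimum) power optimization is a max-min problem: choose the design to maximize the smallest power seen over the sampled uncertainty set. A minimal grid-and-sampling sketch, in which the power function is a hypothetical stand-in for the harvester model rather than the paper's piezoelectric formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def power(design_freq, natural_freq, load_r):
        """Hypothetical stand-in for the harvester's average-power model:
        power falls off with frequency mismatch and depends on the load."""
        mismatch = (design_freq - natural_freq) ** 2
        return load_r / ((1.0 + mismatch) * (1.0 + load_r) ** 2)

    # Sample the uncertain parameters around their nominal values.
    natural_freq = rng.normal(1.0, 0.05, size=2000)
    load_r = rng.normal(1.0, 0.10, size=2000)

    designs = np.linspace(0.8, 1.2, 81)  # candidate design grid
    worst_power = [np.min(power(d, natural_freq, load_r)) for d in designs]
    best = designs[int(np.argmax(worst_power))]  # the max-min design
    print(best, max(worst_power))
    ```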

  3. "The Et Tu Brute Complex" Compulsive Self Betrayal

    ERIC Educational Resources Information Center

    Antus, Robert Lawrence

    2006-01-01

    In this article, the author discusses "The Et Tu Brute Complex." More specifically, this phenomenon occurs when a person, instead of supporting and befriending himself, orally condemns himself in front of other people and becomes his own worst enemy. This is a form of compulsive self-hatred. Most often, the victim of this complex is unaware of the…

  4. Numerical modelling of vehicular pollution dispersion: The application of computational fluid dynamics techniques, a case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderheyden, M.D.; Dajka, S.C.; Sinclair, R.

    1997-12-31

    Numerical modelling of vehicular emissions using the United States Environmental Protection Agency's CALINE4 and CAL3QHC dispersion models to predict air quality impacts in the vicinity of roadways is a widely accepted means of evaluating vehicular emissions impacts. The numerical models account for atmospheric dispersion in both open and suburban terrains. When assessing roadways in urban areas with numerous large buildings, however, the models are unable to account for the complex airflows and therefore do not provide satisfactory estimates of pollutant concentrations. Either wind tunnel modelling or Computational Fluid Dynamics (CFD) techniques can be used to assess the impact of vehicle emissions in an urban core. This paper presents a case study where CFD is used to predict worst-case air quality impacts for two development configurations: an existing roadway configuration and a proposed configuration with an elevated pedestrian walkway. In assessing these configurations, worst-case meteorology and traffic conditions are modeled to allow for the prediction of pollutant concentrations due to vehicular emissions on two major streets in Hong Kong. The CFD modelling domain is divided into thousands of control volumes. Each of these control volumes has a central point called a node where velocities, pollutant concentration and other auxiliary variables are calculated. The region of interest, the pedestrian link and its immediate surroundings, has a denser distribution of nodes in order to give a better resolution of local flow details. Separate CFD modelling runs were undertaken for each development configuration at wind direction increments of 15 degrees. For comparison of the development scenarios, pollutant concentrations (carbon monoxide, nitrogen dioxide and particulate matter) are predicted at up to 99 receptor nodes representing sensitive locations.

  5. "Best Case/Worst Case": Training Surgeons to Use a Novel Communication Tool for High-Risk Acute Surgical Problems.

    PubMed

    Kruser, Jacqueline M; Taylor, Lauren J; Campbell, Toby C; Zelenski, Amy; Johnson, Sara K; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Kwekkeboom, Kris L; Schwarze, Margaret L

    2017-04-01

    Older adults often have surgery in the months preceding death, which can initiate postoperative treatments inconsistent with end-of-life values. "Best Case/Worst Case" (BC/WC) is a communication tool designed to promote goal-concordant care during discussions about high-risk surgery. The objective of this study was to evaluate a structured training program designed to teach surgeons how to use BC/WC. Twenty-five surgeons from one tertiary care hospital completed a two-hour training session followed by individual coaching. We audio-recorded surgeons using BC/WC with standardized patients and 20 hospitalized patients. Hospitalized patients and their families participated in an open-ended interview 30 to 120 days after enrollment. We used a checklist of 11 BC/WC elements to measure tool fidelity, and surgeons completed the Practitioner Opinion Survey to measure acceptability of the tool. We used qualitative analysis to evaluate variability in tool content and to characterize patient and family perceptions of the tool. Surgeons completed a median of 10 of 11 BC/WC elements with both standardized and hospitalized patients (range 5-11). We found moderate variability in the presentation of treatment options and the description of outcomes. Three months after training, 79% of surgeons reported BC/WC is better than their usual approach and 71% endorsed active use of BC/WC in clinical practice. Patients and families found that BC/WC established expectations, provided clarity, and facilitated deliberation. Surgeons can learn to use BC/WC with older patients considering acute high-risk surgical interventions. Surgeons, patients, and family members endorse BC/WC as a strategy to support complex decision making.

  6. Severe anaemia associated with Plasmodium falciparum infection in children: consequences for additional blood sampling for research.

    PubMed

    Kuijpers, Laura Maria Francisca; Maltha, Jessica; Guiraud, Issa; Kaboré, Bérenger; Lompo, Palpouguini; Devlieger, Hugo; Van Geet, Chris; Tinto, Halidou; Jacobs, Jan

    2016-06-02

    Plasmodium falciparum infection may cause severe anaemia, particularly in children. When planning a diagnostic study on children suspected of severe malaria in sub-Saharan Africa, the question arose of how much blood could be safely sampled; the intended blood volumes (blood cultures and EDTA blood) were 6 mL (children aged <6 years) and 10 mL (6-12 years). A previous review [Bull World Health Organ. 89: 46-53. 2011] recommended not exceeding 3.8% of total blood volume (TBV). In a simulation exercise using data from children previously enrolled in a study on severe malaria and bacteraemia in Burkina Faso, the impact of this 3.8% safety guideline was evaluated. For a total of 666 children aged >2 months to <12 years, data on age, weight and haemoglobin value (Hb) were available. For each child, the estimated TBV (TBVe, in mL) was calculated by multiplying the body weight (kg) by a factor of 80 mL/kg. Next, TBVe was corrected for the degree of anaemia to obtain the functional TBV (TBVf). The correction factor was the ratio of the child's Hb to the reference Hb; both the lowest ('best-case') and highest ('worst-case') reference Hb values were used. The volume corresponding to a 3.8% proportion of this TBVf was then calculated and compared with the blood volumes intended to be sampled. When applied to the Burkina Faso cohort, the simulation exercise showed that in 5.3% (best case) and 11.4% (worst case) of children the blood volume intended to be sampled would exceed the volume defined by the 3.8% safety guideline. The highest proportions were in the age groups 2-6 months (19.0%; worst-case scenario) and 6 months-2 years (15.7%; worst-case scenario). A positive rapid diagnostic test for P. falciparum was associated with an increased risk of violating the safety guideline in the worst-case scenario (p = 0.016). Blood sampling in children for research in P. falciparum endemic settings may easily violate the proposed safety guideline when it is applied to TBVf. Ethics committees and researchers should be wary of this and take appropriate precautions.
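
    The guideline check itself is a short calculation: estimated total blood volume at 80 mL/kg, scaled by the child's Hb relative to the reference value to obtain the functional volume, of which at most 3.8% may be drawn. A direct transcription of that arithmetic, with the reference Hb supplied by the caller (the exercise used both the lowest, best-case, and the highest, worst-case, reference values); the child in the example is hypothetical:

    ```python
    def max_safe_sample_ml(weight_kg, hb_g_dl, reference_hb_g_dl,
                           ml_per_kg=80.0, safety_fraction=0.038):
        """3.8% of the functional total blood volume (TBVf), where
        TBVf = weight * 80 mL/kg * (child's Hb / reference Hb)."""
        tbv_estimated = weight_kg * ml_per_kg
        tbv_functional = tbv_estimated * (hb_g_dl / reference_hb_g_dl)
        return safety_fraction * tbv_functional

    # Hypothetical 5 kg infant with severe anaemia (Hb 4 g/dL) against a
    # worst-case reference Hb of 15 g/dL: the 6 mL intended for children
    # under six exceeds the computed limit.
    limit = max_safe_sample_ml(5, 4, 15)
    print(round(limit, 1), "mL:", "OK" if 6 <= limit else "exceeds guideline")
    ```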

  7. Self-Encoded Spread Spectrum Modulation for Robust Anti-Jamming Communication

    DTIC Science & Technology

    2009-06-30

    experience in both theoretical and experimental aspects of RF and optical communications, multi-user CDMA systems, transmitter precoding and code...the performance of DS- and FH-SESS modulation in the presence of worst-case jamming, develop innovative SESS schemes that further exploit time and...Determine BER and AJ performance of the feedback and iterative detectors in DS-SESS under pulsed-noise and multi-tone jamming • Task 2: Develop a scheme

  8. The Worst of Times? A Tale of Two Higher Education Institutions in France: Their Merger and Its Impact on Staff Working Lives

    ERIC Educational Resources Information Center

    Evans, Linda

    2017-01-01

    This paper presents the preliminary findings of a case study of the merger of two higher education institutions in France. The paper's main focus is not the politics that gave rise to the institutional merger, nor the rights or wrongs of the decision, nor the merger process itself; rather, it is the extent to which, and the ways in which, these features…

  9. Monitoring Churn in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger

    Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.

  10. Discussions On Worst-Case Test Condition For Single Event Burnout

    NASA Astrophysics Data System (ADS)

    Liu, Sandra; Zafrani, Max; Sherman, Phillip

    2011-10-01

    This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs, based on an analysis of quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB uses the ion with the highest mass, which results in the highest transient current due to charge deposition and displacement damage. The analyses also show that it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, a conclusion verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.

  11. Atmospheric transport of radioactive debris to Norway in case of a hypothetical accident related to the recovery of the Russian submarine K-27.

    PubMed

    Bartnicki, Jerzy; Amundsen, Ingar; Brown, Justin; Hosseini, Ali; Hov, Øystein; Haakenstad, Hilde; Klein, Heiko; Lind, Ole Christian; Salbu, Brit; Szacinski Wendel, Cato C; Ytre-Eide, Martin Album

    2016-01-01

    The Russian nuclear submarine K-27 suffered a loss-of-coolant accident in 1968 and, with nuclear fuel in both reactors, was scuttled in 1981 in the outer part of Stepovogo Bay on the eastern coast of Novaya Zemlya. The inventory of spent nuclear fuel on board the submarine is of concern because it represents a potential source of radioactive contamination of the Kara Sea, and a criticality accident with potential for long-range atmospheric transport of radioactive particles cannot be ruled out. To address these concerns, and to provide a better basis for evaluating possible radiological impacts of potential releases should a salvage operation be initiated, we assessed the atmospheric transport of radionuclides and their deposition in Norway from a hypothetical criticality accident on board the K-27. To achieve this, a long-term (33-year) meteorological database was prepared and used to select the worst-case meteorological scenarios for each of three possible locations of the potential accident. The dispersion model SNAP was then run with the source term for the worst-case accident scenario and the selected meteorological scenarios. The results showed the predictions to be very sensitive to the estimation of the source term for the worst-case accident, and especially to the sizes and densities of the released radioactive particles. The results indicated that a large area of Norway could be affected, but that deposition in Northern Norway would be considerably higher than in other areas of the country. The simulations showed that deposition from the worst-case scenario of a hypothetical K-27 accident would be at least two orders of magnitude lower than the deposition observed in Norway following the Chernobyl accident.
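
    Underneath the scenario selection is an exhaustive screening loop over the meteorological archive: run (or approximate) the dispersion for every candidate release time and keep the one maximizing deposition at the receptor of interest. A schematic of that outer loop, where `run_dispersion` is a hypothetical stand-in for a full transport model such as SNAP:

    ```python
    def worst_case_scenario(start_times, run_dispersion, receptor):
        """Screen every meteorological scenario; return the release time
        giving the highest deposition at the receptor.

        run_dispersion(start_time, receptor) -> deposition (e.g., Bq/m^2)."""
        best_time, best_deposition = None, float("-inf")
        for t in start_times:
            deposition = run_dispersion(t, receptor)
            if deposition > best_deposition:
                best_time, best_deposition = t, deposition
        return best_time, best_deposition
    ```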

  12. Quantum algorithm for association rules mining

    NASA Astrophysics Data System (ADS)

    Yu, Chao-Hua; Gao, Fei; Wang, Qing-Le; Wen, Qiao-Yan

    2016-10-01

    Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Given a transaction database with a large number of transactions and items, the task of ARM is to acquire the consumption habits of customers by discovering relationships between itemsets (sets of items). In this paper, we address ARM in the quantum setting and propose a quantum algorithm for the key part of ARM: finding the frequent itemsets among the candidate itemsets and acquiring their supports. Specifically, for the case in which there are Mf(k) frequent k-itemsets among the Mc(k) candidate k-itemsets (Mf(k) ≤ Mc(k)), our algorithm can efficiently mine these frequent k-itemsets and estimate their supports by using parallel amplitude estimation and amplitude amplification, with complexity O(k√(Mc(k)Mf(k))/ε), where ε is the error in estimating the supports. Compared with the classical counterpart, i.e., the classical sampling-based algorithm with complexity O(kMc(k)/ε²), our quantum algorithm quadratically improves the dependence on both ε and Mc(k) in the best case, when Mf(k) ≪ Mc(k), and on ε alone in the worst case, when Mf(k) ≈ Mc(k).
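
    For contrast, the classical sampling baseline mentioned above is easy to state: estimate an itemset's support as the fraction of randomly sampled transactions containing it, which requires on the order of 1/ε² samples for additive error ε; this is the ε-dependence the quantum routine improves quadratically. A minimal sketch with a toy transaction database (constant factors in the sample size are omitted):

    ```python
    import random

    def estimate_support(transactions, itemset, epsilon, seed=7):
        """Additive-error support estimate by transaction sampling.
        Sample size scales as 1/epsilon**2 (constants omitted)."""
        rng = random.Random(seed)
        m = int(1 / epsilon ** 2)
        hits = sum(itemset <= rng.choice(transactions) for _ in range(m))
        return hits / m

    transactions = [frozenset(t) for t in ({"milk", "bread"}, {"milk", "eggs"},
                                           {"bread", "eggs"},
                                           {"milk", "bread", "eggs"})]
    # True support of {milk, bread} is 0.5; the estimate converges to it.
    print(estimate_support(transactions, frozenset({"milk", "bread"}), 0.05))
    ```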

  13. Costs and cost-effectiveness of 9-valent human papillomavirus (HPV) vaccination in two East African countries.

    PubMed

    Kiatpongsan, Sorapop; Kim, Jane J

    2014-01-01

    Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15-30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. 
Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain.
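
    As a rough illustration of the threshold arithmetic in such analyses, the snippet below inverts the ICER definition, ICER = (added cost - costs averted) / life-years gained, to recover the largest added vaccine cost that stays under a willingness-to-pay threshold. All numbers are hypothetical placeholders, not values from the study.

    ```python
    def max_added_cost(threshold, delta_le, costs_averted):
        """Largest added cost per vaccinated girl such that
        ICER = (added_cost - costs_averted) / delta_le <= threshold."""
        return threshold * delta_le + costs_averted

    # Hypothetical inputs: threshold = per-capita GDP in I$,
    # delta_le = discounted life-years gained per vaccinated girl.
    print(max_added_cost(threshold=1200.0, delta_le=0.005, costs_averted=2.5))  # 8.5
    ```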

  14. Costs and Cost-Effectiveness of 9-Valent Human Papillomavirus (HPV) Vaccination in Two East African Countries

    PubMed Central

    Kiatpongsan, Sorapop; Kim, Jane J.

    2014-01-01

    Background Current prophylactic vaccines against human papillomavirus (HPV) target two of the most oncogenic types, HPV-16 and -18, which contribute to roughly 70% of cervical cancers worldwide. Second-generation HPV vaccines include a 9-valent vaccine, which targets five additional oncogenic HPV types (i.e., 31, 33, 45, 52, and 58) that contribute to another 15–30% of cervical cancer cases. The objective of this study was to determine a range of vaccine costs for which the 9-valent vaccine would be cost-effective in comparison to the current vaccines in two less developed countries (i.e., Kenya and Uganda). Methods and Findings The analysis was performed using a natural history disease simulation model of HPV and cervical cancer. The mathematical model simulates individual women from an early age and tracks health events and resource use as they transition through clinically-relevant health states over their lifetime. Epidemiological data on HPV prevalence and cancer incidence were used to adapt the model to Kenya and Uganda. Health benefit, or effectiveness, from HPV vaccination was measured in terms of life expectancy, and costs were measured in international dollars (I$). The incremental cost of the 9-valent vaccine included the added cost of the vaccine counterbalanced by costs averted from additional cancer cases prevented. All future costs and health benefits were discounted at an annual rate of 3% in the base case analysis. We conducted sensitivity analyses to investigate how infection with multiple HPV types, unidentifiable HPV types in cancer cases, and cross-protection against non-vaccine types could affect the potential cost range of the 9-valent vaccine. In the base case analysis in Kenya, we found that vaccination with the 9-valent vaccine was very cost-effective (i.e., had an incremental cost-effectiveness ratio below per-capita GDP), compared to the current vaccines provided the added cost of the 9-valent vaccine did not exceed I$9.7 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$5.2 and I$16.2 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP where the 9-valent vaccine would be considered cost-effective, the thresholds of added costs associated with the 9-valent vaccine were I$27.3, I$14.5 and I$45.3 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. In Uganda, vaccination with the 9-valent vaccine was very cost-effective when the added cost of the 9-valent vaccine did not exceed I$8.3 per vaccinated girl. To be considered very cost-effective, the added cost per vaccinated girl could go up to I$4.5 and I$13.7 in the worst-case and best-case scenarios, respectively. At a willingness-to-pay threshold of three times per-capita GDP, the thresholds of added costs associated with the 9-valent vaccine were I$23.4, I$12.6 and I$38.4 per vaccinated girl for the base case, worst-case and best-case scenarios, respectively. Conclusions This study provides a threshold range of incremental costs associated with the 9-valent HPV vaccine that would make it a cost-effective intervention in comparison to currently available HPV vaccines in Kenya and Uganda. These prices represent a 71% and 61% increase over the price offered to the GAVI Alliance ($5 per dose) for the currently available 2- and 4-valent vaccines in Kenya and Uganda, respectively. 
Despite evidence of cost-effectiveness, critical challenges around affordability and feasibility of HPV vaccination and other competing needs in low-resource settings such as Kenya and Uganda remain. PMID:25198104

  15. A bioinspired collision detection algorithm for VLSI implementation

    NASA Astrophysics Data System (ADS)

    Cuadri, J.; Linan, G.; Stafford, R.; Keil, M. S.; Roca, E.

    2005-06-01

    In this paper a bioinspired algorithm for collision detection is proposed, based on previous models of the locust (Locusta migratoria) visual system reported by F.C. Rind and her group at the University of Newcastle-upon-Tyne. The algorithm is suitable for VLSI implementation in standard CMOS technologies as a system-on-chip for automotive applications. The working principle of the algorithm is to process a video stream that represents the current scenario, and to fire an alarm whenever an object approaches on a collision course. Moreover, it establishes a scale of warning states, from no danger to collision alarm, depending on the activity detected in the current scenario. In the worst case, the minimum time before collision at which the model fires the collision alarm is 40 msec (1 frame before collision, at 25 frames per second). Since the average time needed to successfully fire an airbag system is 2 msec, even in the worst case this algorithm would be very helpful to more efficiently arm the airbag system, or even to take some kind of collision avoidance countermeasures. Furthermore, two additional modules have been included: a "Topological Feature Estimator" and an "Attention Focusing Algorithm". The former takes into account the shape of the approaching object to decide whether it is a person, a road line or a car. This helps to take more adequate countermeasures and to filter false alarms. The latter centres the processing power into the most active zones of the input frame, thus saving memory and processing time resources.
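
    A crude, illustrative stand-in for this kind of looming detector is sketched below: it flags an approaching object when total edge activity in the frame grows steadily. It assumes grayscale numpy frames and is a toy model, not the authors' LGMD-based VLSI algorithm.

    ```python
    import numpy as np

    def looming_alarm(frames, growth=1.2, window=3):
        """Toy looming detector: measure per-frame edge activity (gradient
        magnitude) and fire when it grows by `growth`x over `window`
        consecutive frames, a rough proxy for an object on a collision course."""
        acts = []
        for f in frames:
            gy, gx = np.gradient(f.astype(float))
            acts.append(np.hypot(gx, gy).sum())
        for i in range(window, len(acts)):
            if all(acts[j] > growth * acts[j - 1]
                   for j in range(i - window + 1, i + 1)):
                return i  # index of the frame at which the alarm fires
        return None
    ```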

  16. Geometrical Design of a Scalable Overlapping Planar Spiral Coil Array to Generate a Homogeneous Magnetic Field.

    PubMed

    Jow, Uei-Ming; Ghovanloo, Maysam

    2012-12-21

    We present a design methodology for an overlapping hexagonal planar spiral coil (hex-PSC) array, optimized for creation of a homogeneous magnetic field for wireless power transmission to randomly moving objects. The modular hex-PSC array has been implemented in the form of three parallel conductive layers, for which an iterative optimization procedure defines the PSC geometries. Since the overlapping hex-PSCs in different layers have different characteristics, the worst-case coil-coupling condition should be designed to provide the maximum power transfer efficiency (PTE) in order to minimize the spatial received power fluctuations. In the worst case, the transmitter (Tx) hex-PSC is overlapped by six PSCs and surrounded by six other adjacent PSCs. Using a receiver (Rx) coil, 20 mm in radius, at the coupling distance of 78 mm and maximum lateral misalignment of 49.1 mm (1/√3 of the PSC radius), we can receive power at a PTE of 19.6% from the worst-case PSC. Furthermore, we have studied the effects of Rx coil tilting and concluded that the PTE degrades significantly when θ > 60°. Solutions are: 1) activating two adjacent overlapping hex-PSCs simultaneously with out-of-phase excitations to create horizontal magnetic flux and 2) inclusion of a small energy storage element in the Rx module to maintain power in the worst-case scenarios. In order to verify the proposed design methodology, we have developed the EnerCage system, which aims to power up biological instruments attached to or implanted in freely behaving small animal subjects' bodies in long-term electrophysiology experiments within large experimental arenas.

  17. Failed State 2030: Nigeria - A Case Study

    DTIC Science & Technology

    2011-02-01

    disastrous ecological conditions in its Niger Delta region, and is fighting one of the modern world's worst legacies of political and economic corruption. A nation with more than 350 ethnic groups, 250 languages, and three distinct religious...happening in the world. The discussion herein is a mix of cultural sociology, political science, economics, military science (sometimes called

  18. Saving Time, Saving Money: The Economics of Unclogging America's Worst Bottlenecks

    DOT National Transportation Integrated Search

    2000-01-01

    A 1999 study by the American Highway Users Alliance entitled "Unclogging America's Arteries: Prescriptions for Healthier Highways" identified the 166 worst bottlenecks in the country and evaluated the benefits of removing them. By assigning monetary ...

  19. Evaluations of the conformational search accuracy of CAMDAS using experimental three-dimensional structures of protein-ligand complexes

    NASA Astrophysics Data System (ADS)

    Oda, A.; Yamaotsu, N.; Hirono, S.; Takano, Y.; Fukuyoshi, S.; Nakagaki, R.; Takahashi, O.

    2013-08-01

    CAMDAS is a conformational search program that carries out high-temperature molecular dynamics (MD) calculations. In this study, the conformational search ability of CAMDAS was evaluated using 281 structurally characterized protein-ligand complexes as a test set, and the influences of initial settings and initial conformations on the search results were examined. Using the CAMDAS program, reasonable conformations whose root mean square deviations (RMSDs) from the crystal structures were less than 2.0 Å could be obtained for 96% of the test set, even when the worst initial settings were used. The success rate was comparable to that of OMEGA, and the errors of CAMDAS were smaller than those of OMEGA: with CAMDAS the worst RMSD was around 2.5 Å, whereas the worst value obtained using OMEGA was around 4.0 Å. The results indicated that CAMDAS is a robust and versatile conformational search method that can be used for a wide variety of small molecules. In addition, the search accuracy was further improved by longer MD calculations and by multiple MD simulations.
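
    For reference, the RMSD criterion used above is straightforward to compute; the minimal sketch below assumes matched atom ordering and omits the rigid-body superposition that a full evaluation would perform first.

    ```python
    import numpy as np

    def rmsd(coords_a, coords_b):
        """RMSD between two conformations given as (N, 3) coordinate arrays
        with matching atom order (superposition/alignment omitted)."""
        a = np.asarray(coords_a, dtype=float)
        b = np.asarray(coords_b, dtype=float)
        return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

    xyz = np.random.rand(10, 3)
    print(rmsd(xyz, xyz))  # identical conformations -> 0.0
    ```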

  20. JPL Thermal Design Modeling Philosophy and NASA-STD-7009 Standard for Models and Simulations - A Case Study

    NASA Technical Reports Server (NTRS)

    Avila, Arturo

    2011-01-01

    The standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature. These simulations therefore represent the upper and lower bounds, which effectively constitutes JPL's thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
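
    The stacking procedure can be illustrated with a toy calculation: push each uncertain parameter to whichever end of its range biases the prediction hot (or cold) and sum the effects onto the nominal temperature. Parameter names and sensitivities below are invented for illustration, not JPL values.

    ```python
    def worst_case_bounds(nominal_temp, sensitivities):
        """Stack each parameter's temperature effect in the worst-case
        direction. `sensitivities` maps a parameter to the temperature
        change (deg C) at the hot and cold extremes of its uncertainty."""
        hot = nominal_temp + sum(max(h, c, 0.0) for h, c in sensitivities.values())
        cold = nominal_temp + sum(min(h, c, 0.0) for h, c in sensitivities.values())
        return hot, cold

    sens = {  # hypothetical (hot-extreme, cold-extreme) effects in deg C
        "blanket effective emittance": (+4.0, -3.0),
        "interface conductance": (+2.5, -2.0),
        "surface solar absorptance": (+3.0, -1.5),
    }
    print(worst_case_bounds(20.0, sens))  # -> (29.5, 13.5)
    ```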

  1. Feedback system design with an uncertain plant

    NASA Technical Reports Server (NTRS)

    Milich, D.; Valavani, L.; Athans, M.

    1986-01-01

    A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.
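
    A crude numerical rendering of this objective, assuming a SISO loop and a finite plant family standing in for the uncertainty set: evaluate the sensitivity S(jw) = 1/(1 + P(jw)C(jw)) on a frequency grid and take the worst peak over the family. The plant family and controller below are illustrative, not the paper's design.

    ```python
    import numpy as np

    def worst_case_sensitivity_peak(controller, plants, omegas):
        """Approximate the worst-case H-infinity norm of the sensitivity
        over a finite family of plant frequency responses."""
        peak = 0.0
        for P in plants:
            S = np.array([1.0 / (1.0 + P(1j * w) * controller(1j * w))
                          for w in omegas])
            peak = max(peak, np.abs(S).max())
        return peak

    # Hypothetical first-order plants with uncertain gain, plus a PI controller.
    plants = [lambda s, k=k: k / (s + 1.0) for k in (0.5, 1.0, 2.0)]
    C = lambda s: 2.0 + 1.0 / s
    w = np.logspace(-2, 2, 400)
    print(worst_case_sensitivity_peak(C, plants, w))
    ```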

  2. A Multi-Armed Bandit Approach to Following a Markov Chain

    DTIC Science & Technology

    2017-06-01

    focus on the House to Café transition (p1,4). We develop a Multi-Armed Bandit approach for efficiently following this target, where each state takes the...and longitude (each state corresponding to a physical location and a small set of activities). The searcher would then apply our approach on this...the target’s transition probability and the true probability over time. Further, we seek to provide upper bounds (i.e., worst case bounds) on the

  3. A lock-free priority queue design based on multi-dimensional linked lists

    DOE PAGES

    Dechev, Damian; Zhang, Deli

    2015-04-03

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over state-of-the-art approaches under high concurrency.
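
    The injective key-to-coordinate mapping at the heart of the MDList can be sketched as a base conversion; this is an illustrative reconstruction from the description above, not the authors' code.

    ```python
    def key_to_coords(key, dims, universe):
        """Map a scalar key in [0, universe) to `dims` coordinates by
        writing the key in base ceil(universe ** (1/dims)) and using the
        digits (most significant first) as coordinates."""
        base = int(round(universe ** (1.0 / dims)))
        while base ** dims < universe:  # guard against rounding down
            base += 1
        coords = []
        for _ in range(dims):
            coords.append(key % base)
            key //= base
        return coords[::-1]

    print(key_to_coords(1000, dims=4, universe=2**16))  # -> [0, 3, 14, 8]
    ```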

  4. A lock-free priority queue design based on multi-dimensional linked lists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechev, Damian; Zhang, Deli

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over state-of-the-art approaches under high concurrency.

  5. Solar Particle Event Exposures and Local Tissue Environments in Free Space and on Martian Surface

    NASA Technical Reports Server (NTRS)

    Kim, M. Y.; Shinn, J. L.; Singleterry, R. C.; Atwell, W.; Wilson, J. W.

    1999-01-01

    Solar particle events (SPEs) are a concern to space missions outside Earth's geomagnetic field. The September 29, 1989 SPE is the largest ground-level event since February 23, 1956. It is an iron-rich event for which the spectra are well measured. Because ten times this event matches the ground-level data of the February 1956 SPE, it is suggested that an event with ten times the scaled spectra of the September 29, 1989 SPE be used as a worst-case SPE for spacecraft design. For the worst-case SPE, the input spectra were reconstructed using Nymmik's (1995) model for protons, the O and Fe ion spectra of Tylka et al. (1997) to evaluate the iron enhancement ratio, and the Solar Energetic Particle Baseline (SEPB) composition of McGuire et al. (1986) for the heavy ions. The necessary transport properties of the shielding materials and the astronaut's body tissues are evaluated using the HZETRN code. Three shield configurations (assumed to be aluminum) are considered: a space suit taken as 0.3 g/sq cm, a helmet/pressure vessel as 1 g/sq cm, and an equipment room of 5 g/sq cm. A shelter is taken as 10 g/sq cm on the Martian surface. The effect of shielding due to the Martian atmosphere is included. The astronaut geometry is taken from the computerized anatomical man (CAM) model.

  6. Cytogenetic study of a patient with infant acute lymphoblastic leukemia using GTG-banding and chromosome painting.

    PubMed

    Alter, D; Mark, H F

    2000-10-01

    Numerical and structural chromosomal abnormalities occur in up to 90% of cases of childhood acute lymphoblastic leukemia (ALL). Two-thirds of these abnormalities are recurrent. The most common abnormalities are pseudodiploidy and t(1;19), occurring in 40% and 5-6% of cases, respectively. Hyperdiploidy has the best prognosis, with an 80-90% 5-year survival. The 4;11 translocation has the worst prognosis, with a 10-35% 5-year survival. We report a patient with infant acute lymphoblastic leukemia and nonrecurrent rearrangements of chromosomes 10 and 11. Structural rearrangements between chromosomes 10 and 11 have been observed in 0.5% of all cases of childhood ALL with cytogenetic abnormalities. The identification of the apparently unique structural abnormalities was achieved using fluorescent in situ hybridization (FISH) with chromosome 10- and chromosome 11-specific painting probes as an adjunct to conventional cytogenetics. As is often the case, suboptimal preparations preclude unequivocal identification of complex rearrangements by conventional banding techniques. The cytogenetic diagnosis of our patient was established as 46,XY, der(10)t(10;11)(p15;q14)t(10;11)(q25;p11), der(11)t(10;11)(p15;q14)t(10;11)(q25;p11). The benefits of FISH serve to increase the resolution of detection for chromosomal abnormalities and the understanding of the pathogenic mechanisms of childhood ALL. Copyright 2000 Academic Press.

  7. Real-time determination of the worst tsunami scenario based on Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya

    2016-04-01

    In recent years, real-time tsunami inundation forecasting has been developed with the advances of dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimation, but acquiring the observed data takes additional time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting quickly enough. Rather than waiting for precise tsunami observations, we therefore aim, from a disaster management point of view, to determine the worst tsunami source scenario, for use in real-time tsunami inundation forecasting and mapping, from the seismic information of Earthquake Early Warning (EEW), which is available immediately after the event is triggered. After an earthquake occurs, JMA's EEW estimates magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and scaling law, we determine possible tsunami source scenarios and search for the worst one by the superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to a 2-dimensional Gaussian unit source (e.g. Tsushima et al., 2014). The scenario analysis of our method consists of the following 2 steps. (1) Searching the worst-scenario range by calculating 90 scenarios with various strikes and fault positions; from the maximum tsunami heights of the 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern. (2) Calculating 900 scenarios with different strike, dip, length, width, depth and fault position, where the strike is limited to the range obtained from the 90-scenario calculation; from these 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake and verified to confirm that it can effectively find the worst tsunami source scenario in real time, to be used as an input to real-time tsunami inundation forecasting.
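
    Because tsunami propagation is treated as linear, the scenario search reduces to superposing pre-computed unit-source responses and ranking the scenarios, as sketched below; array shapes and values are illustrative only.

    ```python
    import numpy as np

    def worst_scenario(green, scenario_weights):
        """green: (n_sources, n_times) pre-computed Green's functions
        (tsunami height time series at an offshore point per unit source).
        scenario_weights: (n_scenarios, n_sources) source weights derived
        from each candidate fault geometry. Returns the scenario with the
        highest peak water level."""
        peaks = [(w @ green).max() for w in scenario_weights]  # superposition
        best = int(np.argmax(peaks))
        return best, peaks[best]

    rng = np.random.default_rng(1)
    G = rng.random((5, 200))         # 5 unit sources, 200 time steps
    W = rng.random((10, 5))          # 10 candidate scenarios
    print(worst_scenario(G, W))
    ```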

  8. Novel selective TOCSY method enables NMR spectral elucidation of metabolomic mixtures

    NASA Astrophysics Data System (ADS)

    MacKinnon, Neil; While, Peter T.; Korvink, Jan G.

    2016-11-01

    Complex mixture analysis is routinely encountered in NMR-based investigations. With the aim of component identification, spectral complexity may be addressed chromatographically or spectroscopically, the latter being favored to reduce sample handling requirements. An attractive experiment is selective total correlation spectroscopy (sel-TOCSY), which is capable of providing tremendous spectral simplification and thereby enhancing assignment capability. Unfortunately, isolating a well resolved resonance becomes increasingly difficult as the complexity of the mixture increases, and the assumption of single spin system excitation is no longer robust. We present TOCSY optimized mixture elucidation (TOOMIXED), a technique capable of performing spectral assignment particularly in the case where the assumption of single spin system excitation is relaxed. Key to the technique is the collection of a series of 1D sel-TOCSY experiments as a function of the isotropic mixing time (τm), resulting in a series of resonance intensities indicative of the underlying molecular structure. By comparing these τm-dependent intensity patterns with a library of pre-determined component spectra, one is able to regain assignment capability. After consideration of the technique's robustness, we tested TOOMIXED first on a model mixture. As a benchmark, we were able to assign a molecule with high confidence in the case of selectively exciting an isolated resonance. Assignment confidence was not compromised when performing TOOMIXED on a resonance known to contain multiple overlapping signals, and in the worst case the method suggested a follow-up sel-TOCSY experiment to confirm an ambiguous assignment. TOOMIXED was then demonstrated on two realistic samples (whisky and urine), where under our conditions an approximate limit of detection of 0.6 mM was determined. Taking into account literature reports for the sel-TOCSY limit of detection, the technique should reach on the order of 10 μM sensitivity. We anticipate this technique will be highly attractive to various analytical fields facing mixture analysis, including metabolomics, foodstuff analysis, pharmaceutical analysis, and forensics.

  9. Thermal Performance of LANDSAT-7 ETM+ Instruments During First Year in Flight

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    2000-01-01

    Landsat-7 was successfully launched into orbit on April 15, 1999. After devoting three months to the bakeout and cool-down of the radiative cooler and to on-orbit checkout, the Enhanced Thematic Mapper Plus (ETM+) began the normal imaging phase of the mission in mid-July 1999. This paper presents the thermal performance of the ETM+ from mid-July 1999 to mid-May 2000. The flight temperatures are compared to the yellow temperature limits, and to the worst cold case and worst hot case flight temperature predictions in the 15-orbit mission design profile. The flight temperature predictions were generated by a thermal model, which was correlated to the observatory thermal balance test data. The yellow temperature limits were derived from the flight temperature predictions, plus some margins. The yellow limits worked well in flight, so that only a few minor changes to them were needed. Overall, the flight temperatures and flight temperature predictions show good agreement. Based on the ETM+ thermal vacuum qualification test, new limits on the imaging time are proposed to increase the average duty cycle and to resolve the problems experienced by the Mission Operation Team.

  10. Dermal uptake and percutaneous penetration of ten flame retardants in a human skin ex vivo model.

    PubMed

    Frederiksen, Marie; Vorkamp, Katrin; Jensen, Niels Martin; Sørensen, Jens Ahm; Knudsen, Lisbeth E; Sørensen, Lars S; Webster, Thomas F; Nielsen, Jesper B

    2016-11-01

    The dermal uptake and percutaneous penetration of ten organic flame retardants were measured using an ex vivo human skin model. The studied compounds were DBDPE, BTBPE, TBP-DBPE, EH-TBB, BEH-TEBP, α-, β- and γ-HBCDD, as well as syn- and anti-DDC-CO. Little or none of the applied flame retardants was recovered in either type of the receptor fluids used (physiological and worst-case). However, significant fractions were recovered in the skin depot, particularly in the upper skin layers. The primary effect of the worst-case receptor fluid was deeper penetration into the skin. The recovered mass was used to calculate lower- and upper-bound permeability coefficients kp. Despite large structural variation between the studied compounds, a clear, significant decreasing trend of kp was observed with increasing log Kow. The results indicate that the dermis may provide a significant barrier for these highly lipophilic compounds. However, based on our results, dermal uptake should be considered in exposure assessments, though it may proceed in a time-lagged manner compared to less hydrophobic compounds. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Parameter Impact on Sharing Studies Between UAS CNPC Satellite Transmitters and Terrestrial Systems

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Wilson, Jeffrey D.; Bishop, William D.

    2015-01-01

    In order to provide a control and non-payload communication (CNPC) link for civil-use unmanned aircraft systems (UAS) when operating in beyond-line-of-sight (BLOS) conditions, satellite communication links are generally required. The International Civil Aviation Organization (ICAO) has determined that the CNPC link must operate over protected aviation safety spectrum allocations. Although a suitable allocation exists in the 5030-5091 MHz band, no satellites provide operations in this band and none are currently planned. In order to avoid a very lengthy delay in the deployment of UAS in BLOS conditions, it has been proposed to use existing satellites operating in the Fixed Satellite Service (FSS), of which many operate in several spectrum bands. Regulatory actions by the International Telecommunications Union (ITU) are needed to enable such a use on an international basis, and indeed Agenda Item (AI) 1.5 for the 2015 World Radiocommunication Conference (WRC) was established to decide on the enactment of possible regulatory provisions. As part of the preparation for AI 1.5, studies on sharing FSS bands between existing services and CNPC for UAS are being contributed by NASA and others. These studies evaluate the potential impact of satellite CNPC transmitters operating from UAS on other in-band services, and the potential impact of other in-band services on satellite CNPC receivers operating on UAS platforms. Such studies are made more complex by the inclusion of what are essentially moving FSS earth stations, compared to typical sharing studies between fixed elements. Hence, the process of determining the appropriate technical parameters for the studies is difficult. In order to enable a sharing study to be completed in a less-than-infinite amount of time, the number of parameters exercised must be greatly limited. Therefore, understanding the impact of various parameter choices is accomplished through sensitivity analyses. In the case of sharing studies for AI 1.5, identification of worst-case parameters allows the studies to be focused on worst-case scenarios with assurance that other parameter combinations will yield comparatively better results and therefore do not need to be fully analyzed. In this paper, the results of such sensitivity analyses are presented for the case of sharing between UAS CNPC satellite transmitters and terrestrial receivers using the Fixed Service (FS) operating in the same bands, and the implications of these analyses on sharing study results.

  12. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repeated execution introduces substantial redundancy, because the change from a prior scenario to a later scenario is very minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed scenarios in later executions while keeping exactly the same results as in the case of whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm of the exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments of Tokyo traffic simulation, the exact-differential simulation shows an elapsed-time improvement of 7.26 times on average, and of 2.26 times even in the worst case, compared with whole simulation.

  13. Solving a supply chain scheduling problem with non-identical job sizes and release times by applying a novel effective heuristic algorithm

    NASA Astrophysics Data System (ADS)

    Pei, Jun; Liu, Xinbao; Pardalos, Panos M.; Fan, Wenjuan; Wang, Ling; Yang, Shanlin

    2016-03-01

    Motivated by applications in manufacturing industry, we consider a supply chain scheduling problem, where each job is characterised by non-identical sizes, different release times and unequal processing times. The objective is to minimise the makespan by making batching and sequencing decisions. The problem is formalised as a mixed integer programming model and proved to be strongly NP-hard. Some structural properties are presented for both the general case and a special case. Based on these properties, a lower bound is derived, and a novel two-phase heuristic (TP-H) is developed to solve the problem, which guarantees to obtain a worst case performance ratio of ?. Computational experiments with a set of different sizes of random instances are conducted to evaluate the proposed approach TP-H, which is superior to another two heuristics proposed in the literature. Furthermore, the experimental results indicate that TP-H can effectively and efficiently solve large-size problems in a reasonable time.

  14. Evaluation of trace analyte identification in complex matrices by low-resolution gas chromatography--Mass spectrometry through signal simulation.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-04-01

    The identification of trace levels of compounds in complex matrices by conventional low-resolution gas chromatography hyphenated with mass spectrometry is based on the comparison of retention times and abundance ratios of characteristic mass spectrum fragments of analyte peaks from calibrators with those of sample peaks. Statistically sound criteria for the comparison of these parameters were developed based on the normal distribution of retention times and the simulation of possible non-normal distributions of correlated abundance ratios. The confidence level used to set the statistical maximum and minimum limits of the parameters defines the true positive rate of identifications. The false positive rate of identification was estimated from worst-case signal noise models. The estimated true and false positive identification rates from one retention time and two correlated ratios of three fragment abundances were combined using simple Bayes' statistics to estimate the probability of the compound identification being correct, designated the examination uncertainty. Models of the variation of examination uncertainty with analyte quantity allowed the estimation of the Limit of Examination as the lowest quantity that produces "Extremely strong" evidence of compound presence. User-friendly MS-Excel files are made available to allow the easy application of the developed approach in routine and research laboratories. The developed approach was successfully applied to the identification of chlorpyrifos-methyl and malathion in QuEChERS method extracts of vegetables with high water content, for which the estimated Limits of Examination are 0.14 mg kg(-1) and 0.23 mg kg(-1), respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
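
    The Bayes combination step can be illustrated as follows, assuming the matched criteria are summarized by overall true- and false-positive rates; the rates and the prior are hypothetical placeholders.

    ```python
    def examination_probability(tpr, fpr, prior=0.5):
        """Posterior probability that the compound is present given that
        all identification criteria matched, by simple Bayes' rule."""
        evidence = tpr * prior + fpr * (1.0 - prior)
        return tpr * prior / evidence

    # Hypothetical rates for one retention time plus two abundance ratios.
    print(examination_probability(tpr=0.95, fpr=0.001))  # ~0.999
    ```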

  15. SU-F-R-39: Effects of Radiation Dose Reduction On Renal Cell Carcinoma Discrimination Using Multi-Phasic CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahi-Anwar, M; Young, S; Lo, P

    Purpose: A method to discriminate different types of renal cell carcinoma (RCC) was developed using attenuation values observed in multiphasic contrast-enhanced CT. This work evaluates the sensitivity of this RCC discrimination task at different CT radiation dose levels. Methods: We selected 5 cases of kidney lesion patients who had undergone four-phase CT scans covering the abdomen to the iliac crest. Through an IRB-approved study, the scans were conducted on 64-slice CT scanners (Definition AS/Definition Flash, Siemens Healthcare) using automatic tube-current modulation (TCM). The protocol included an initial baseline unenhanced scan, followed by three post-contrast injection phases. CTDIvol (32 cm phantom) measured between 9 and 35 mGy for any given phase. As a preliminary study, we limited the scope to the cortico-medullary phase, shown previously to be the most discriminative phase. A previously validated method was used to simulate a reduced dose acquisition via adding noise to raw CT sinogram data, emulating corresponding images at simulated doses of 50%, 25%, and 10%. To discriminate the lesion subtype, ROIs were placed in the most enhancing region of the lesion. The mean HU value of an ROI was extracted and used to discriminate to the worst-case RCC subtype, ranked in the order of clear cell, papillary, chromophobe and the benign oncocytoma. Results: Two patients exhibited a change of worst-case RCC subtype between original and simulated scans, at 25% and 10% doses. In one case, the worst-case RCC subtype changed from oncocytoma to chromophobe at 10% and 25% doses, while the other case changed from oncocytoma to clear cell at 10% dose. Conclusion: Based on preliminary results from an initial cohort of 5 patients, worst-case RCC subtypes remained constant at all simulated dose levels except for 2 patients. Further study conducted on more patients will be needed to confirm our findings. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant Support from: U01 CA181156.

  16. Combined effects of space flight factors and radiation on humans

    NASA Technical Reports Server (NTRS)

    Todd, P.; Pecaut, M. J.; Fleshner, M.; Clarkson, T. W. (Principal Investigator)

    1999-01-01

    The probability that a dose of ionizing radiation kills a cell is about 10,000 times the probability that the cell will be transformed to malignancy. On the other hand, the number of cells killed required to significantly impact health is about 10,000 times the number that must be transformed to cause a late malignancy. If these two risks, cell killing and malignant transformation, are about equal, then the risk that occurs during a mission is more significant than the risk that occurs after a mission. The latent period for acute irradiation effects (cell killing) is about 2-4 weeks; the latent period for malignancy is 10-20 years. If these statements are approximately true, then the impact of cell killing on health in the low-gravity environment of space flight should be examined to establish an estimate of risk. The objective of this study is to synthesize data and conclusions from three areas of space biology and environmental health to arrive at rational risk assessment for radiations received by spacecraft crews: (1) the increased physiological demands of the space flight environment; (2) the effects of the space flight environment on physiological systems; and (3) the effects of radiation on physiological systems. One physiological system has been chosen: the immune response and its components, consisting of myeloid and lymphoid proliferative cell compartments. Best-case and worst-case scenarios are considered. In the worst case, a doubling of immune-function demand, accompanied by a halving of immune capacity, would reduce the endangering dose to a crew member to around 1 Gy.

  17. Architectural impact of FDDI network on scheduling hard real-time traffic

    NASA Technical Reports Server (NTRS)

    Agrawal, Gopal; Chen, Baio; Zhao, Wei; Davari, Sadegh

    1991-01-01

    The architectural impact on guaranteeing synchronous message deadlines in FDDI (Fiber Distributed Data Interface) token ring networks is examined. The FDDI network does not have a facility to support (global) priority arbitration, which is a useful facility for scheduling hard real-time activities. As a result, it was found that the worst-case utilization of synchronous traffic in an FDDI network can be far less than that in a centralized single-processor system. Nevertheless, a scheduling method is proposed and analyzed that can guarantee deadlines of synchronous messages at traffic utilizations up to 33 pct., the highest to date.

  18. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    PubMed

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases, in which most uncertainty scenarios could not meet tight dose limits. For robust optimization, the worst case dose approach was less sensitive to uncertainties than the minmax approach for the prostate and skull-base cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
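
    To make the two robust formulations concrete, the toy functions below evaluate a "worst case dose" composite objective and a "minmax" objective over a set of dose scenarios; the quadratic penalty, doses, and masks are illustrative simplifications of clinical objective functions.

    ```python
    import numpy as np

    def worst_case_dose_objective(dose_scenarios, target_dose, target_mask):
        """Voxelwise worst-case composite: min dose over scenarios inside
        the target, max dose outside, penalized against the prescription."""
        d_min = dose_scenarios.min(axis=0)
        d_max = dose_scenarios.max(axis=0)
        worst = np.where(target_mask, d_min, d_max)
        return ((worst - target_dose) ** 2).mean()

    def minmax_objective(dose_scenarios, target_dose):
        """Minmax: evaluate the objective per scenario, keep the worst one."""
        per_scenario = ((dose_scenarios - target_dose) ** 2).mean(axis=1)
        return per_scenario.max()

    rng = np.random.default_rng(0)
    D = rng.normal(60.0, 2.0, size=(9, 100))  # 9 scenarios x 100 voxels
    mask = np.ones(100, dtype=bool)           # all voxels in target, for brevity
    print(worst_case_dose_objective(D, 60.0, mask), minmax_objective(D, 60.0))
    ```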

  19. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
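
    A compact sketch of the randomized nested loop with pruning is given below, using distance to the k-th nearest neighbor as the outlier score (one common distance-based score; details of the published algorithm are simplified).

    ```python
    import random

    def top_outliers(data, k, n_out, dist, seed=0):
        """Keep the n_out points with the largest distance to their k-th
        nearest neighbor. A point is pruned as soon as k neighbors closer
        than the current cutoff (weakest retained score) are found."""
        pts = list(data)
        random.Random(seed).shuffle(pts)  # random order drives the speedup
        top, cutoff = [], 0.0             # top: list of (score, point)
        for x in pts:
            neighbors, pruned = [], False
            for y in pts:
                if y is x:
                    continue
                neighbors.append(dist(x, y))
                neighbors.sort()
                del neighbors[k:]         # keep the k smallest distances
                if len(neighbors) == k and neighbors[-1] < cutoff:
                    pruned = True         # cannot enter the top outliers
                    break
            if not pruned and len(neighbors) == k:
                top.append((neighbors[-1], x))
                top.sort(key=lambda s: -s[0])
                del top[n_out:]
                if len(top) == n_out:
                    cutoff = top[-1][0]
        return top

    data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
    data.append((8.0, 8.0))               # an obvious outlier
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    print(top_outliers(data, k=5, n_out=3, dist=d2)[0][1])  # -> (8.0, 8.0)
    ```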

  20. Cost-efficacy of biologic therapies for psoriatic arthritis from the perspective of the Taiwanese healthcare system.

    PubMed

    Yang, Tsong-Shing; Chi, Ching-Chi; Wang, Shu-Hui; Lin, Jing-Chi; Lin, Ko-Ming

    2016-10-01

    Biologic therapies are more effective but more costly than conventional therapies in treating psoriatic arthritis. To evaluate the cost-efficacy of etanercept, adalimumab and golimumab therapies in treating active psoriatic arthritis in a Taiwanese setting. We conducted a meta-analysis of randomized placebo-controlled trials to calculate the incremental efficacy of etanercept, adalimumab and golimumab, respectively, in achieving Psoriatic Arthritis Response Criteria (PsARC) and a 20% improvement in the American College of Rheumatology score (ACR20). The base, best, and worst case incremental cost-effectiveness ratios (ICERs) for one subject to achieve PsARC and ACR20 were calculated. The annual ICER per PsARC responder were US$27 047 (best scenario US$16 619; worst scenario US$31 350), US$39 339 (best scenario US$31 846; worst scenario US$53 501) and US$27 085 (best scenario US$22 716; worst scenario US$33 534) for etanercept, adalimumab and golimumab, respectively. The annual ICER per ACR20 responder were US$27 588 (best scenario US$20 900; worst scenario US$41 800), US$39 339 (best scenario US$25 236; worst scenario US$83 595) and US$33 534 (best scenario US$27 616; worst scenario US$44 013) for etanercept, adalimumab and golimumab, respectively. In a Taiwanese setting, etanercept had the lowest annual costs per PsARC and ACR20 responder, while adalimumab had the highest annual costs per PsARC and ACR responder. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.

  1. Assessing oral bioaccessibility of trace elements in soils under worst-case scenarios by automated in-line dynamic extraction as a front end to inductively coupled plasma atomic emission spectrometry.

    PubMed

    Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel

    2014-09-09

    A novel biomimetic extraction procedure that allows for the in-line handling of ≥400 mg solid substrates is herein proposed for automatic ascertainment of trace element (TE) bioaccessibility in soils under worst-case conditions as per recommendations of ISO norms. A unified bioaccessibility/BARGE method (UBM)-like physiologically based extraction test is evaluated for the first time in a dynamic format for accurate assessment of in-vitro bioaccessibility of Cr, Cu, Ni, Pb and Zn in forest and residential-garden soils by on-line coupling of a hybrid flow set-up to inductively coupled plasma atomic emission spectrometry. Three biologically relevant operational extraction modes mimicking: (i) gastric juice extraction alone; (ii) a saliva and gastric juice composite in a unidirectional flow extraction format and (iii) a saliva and gastric juice composite in a recirculation mode were thoroughly investigated. The extraction profiles of the three configurations using digestive fluids were proven to fit a first-order reaction kinetic model for estimating the maximum TE bioaccessibility, that is, the actual worst-case scenario in human risk assessment protocols. A full factorial design, in which the sample amount (400-800 mg), the extractant flow rate (0.5-1.5 mL min(-1)) and the extraction temperature (27-37°C) were selected as variables, was used for the multivariate optimization studies in order to obtain the maximum TE extractability. Two soils of varied physicochemical properties were analysed, and no significant differences were found at the 0.05 significance level between the summation of leached concentrations of TE in gastric juice plus the residual fraction and the total concentration of the overall assayed metals determined by microwave digestion. These results showed the reliability and lack of bias (trueness) of the automatic biomimetic extraction approach using digestive juices. Copyright © 2014 Elsevier B.V. All rights reserved.
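
    The first-order kinetic model mentioned above can be fitted to an extraction profile in a few lines of scipy; the time points and concentrations below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, c_max, k):
        """First-order leaching kinetics: C(t) = C_max * (1 - exp(-k t)),
        where C_max estimates the maximum (worst-case) bioaccessibility."""
        return c_max * (1.0 - np.exp(-k * t))

    # Hypothetical extraction profile: time (min) vs leached Pb (mg/kg).
    t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
    c = np.array([2.1, 3.6, 5.5, 7.0, 7.6, 7.9, 8.0])
    (c_max, k), _ = curve_fit(first_order, t, c, p0=(c.max(), 0.05))
    print(f"estimated maximum bioaccessible concentration: {c_max:.2f} mg/kg")
    ```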

  2. Scheduling Independent Partitions in Integrated Modular Avionics Systems

    PubMed Central

    Du, Chenglie; Han, Pengcheng

    2016-01-01

    Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under the worst case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We firstly present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions, by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the approach proposed in terms of time consumption and acceptance ratio. PMID:27942013

  3. Tolerance allocation for an electronic system using neural network/Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque

    2001-12-01

    The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances as a key factor in reducing cost and remaining competitive. To date, tolerance allocation has been applied mostly to mechanical systems. Monte-Carlo methods are commonly used to study tolerances in the electronic domain, but they are computationally time-consuming. This paper reviews several methods (worst-case analysis, statistical methods, and least-cost allocation by optimization) that can be used for treating the tolerancing problem in an electronic system, and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte-Carlo method providing the basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can easily be extended to a complex system of n components.
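
    The Monte-Carlo step that supplies the basis data can be illustrated on a toy circuit: draw component values within their tolerance bands, propagate them through the circuit equation, and compare the statistical spread against the worst-case corners. The amplifier and values are hypothetical; the neural-network stage is not shown.

    ```python
    import numpy as np

    def mc_tolerance(nominal, tol, n=100_000, seed=0):
        """Monte-Carlo tolerance analysis of a non-inverting amplifier,
        gain G = 1 + R2/R1, with resistors uniform within +/- tol."""
        rng = np.random.default_rng(seed)
        r1 = nominal["R1"] * (1 + tol * rng.uniform(-1, 1, n))
        r2 = nominal["R2"] * (1 + tol * rng.uniform(-1, 1, n))
        gain = 1 + r2 / r1
        # Worst-case (corner) bounds for comparison:
        lo = 1 + nominal["R2"] * (1 - tol) / (nominal["R1"] * (1 + tol))
        hi = 1 + nominal["R2"] * (1 + tol) / (nominal["R1"] * (1 - tol))
        return gain.mean(), gain.std(), (lo, hi)

    print(mc_tolerance({"R1": 1e3, "R2": 9e3}, tol=0.01))
    ```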

  4. Fault-tolerant clock synchronization in distributed systems

    NASA Technical Reports Server (NTRS)

    Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.

    1990-01-01

    Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.

  5. Fixation effects on the release of copper, chromium and arsenic from CCA-C treated marine piles

    Treesearch

    Stan Lebow

    1999-01-01

    This study sought to determine the effect of fixation time and temperature on the release of copper, chromium and arsenic from treated marine piles immersed in seawater under "worst case" conditions. Sections of piles were CCA-C treated to a target retention of 2.5 lbs/ft3 (40 kg/m3) and then allowed to condition at 36°F (2°C) for either 3, 7 or 20 days. As...

  6. SpaceX Recovery Training

    NASA Image and Video Library

    2018-02-28

    On February 28, SpaceX completed a demonstration of their ability to recover the crew and capsule after a nominal water splashdown. This marks an important recovery milestone and joint test. The timeline requirement from splashdown to crew egress onboard the ship is one hour, and the recovery team demonstrated that they can accomplish this operation under worst-case conditions in under 45 minutes. Further improvements are planned to shorten the recovery time even more as the team works to build a process that is safe, repeatable, and efficient.

  7. Operating Policies for Non- stationary Two-Echelon Inventory Systems for Reparable Items.

    DTIC Science & Technology

    1986-05-01

    resupply policy. Even under an HCP, we might want to change the resupply policy at management intervention times to reflect what we predict will happen...management is concerned with the worst performance predicted during the horizon. Regardless of the average performance over the horizon, management may not...locations in DCi(tm-1, tm) and INi(tm-1, tm). Case 3a: ASi(tm-1) > ASi(tm); INi(tm-1, tm) empty. Disposals must be made to lower the asset positions

  8. Buried Target Imaging: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Ghaderi Aram, Morteza; Dehmollaian, Mojtaba; Khaleghi, Ali

    2017-12-01

    A wide variety of qualitative methods have been proposed for microwave imaging. It is difficult to select only one of these methods based on a priori information and measurement equipment to achieve a reliable reconstruction. Various antenna arrangements have been proposed, for instance, which have direct impacts on the complexity of the inverse methods as well as on the quality of the output images. In this study, four qualitative methods, the linear sampling method (LSM), time reversal (TR), diffraction tomography (DT), and back-projection (BP), have been reviewed in a 2D scenario; the performance of the methods is compared within the same framework of a multi-static configuration. The goal is to compare their resolutions and determine their advantages and drawbacks. It is shown that LSM provides the best azimuth resolution but the worst range resolution. It is almost invariant to dielectric contrast and is appropriate for a wide range of dielectric contrasts and relatively large objects. It is also shown that at relatively low dielectric contrasts, TR images are most similar to the true object, show fewer artifacts, and offer high immunity to noise. While suffering from more artifacts due to the presence of some ghost images, DT offers the best range resolution. The results also show that BP has the worst azimuth resolution when reconstructing deeply-buried targets, although its implementation is straightforward and not computationally complex.

  9. Climate Change Impacts on the Tree of Life: Changes in Phylogenetic Diversity Illustrated for Acropora Corals

    PubMed Central

    Faith, Daniel P.; Richards, Zoe T.

    2012-01-01

    The possible loss of whole branches from the tree of life is a dramatic, but under-studied, biological implication of climate change. The tree of life represents an evolutionary heritage providing both present and future benefits to humanity, often in unanticipated ways. Losses in this evolutionary (evo) life-support system represent losses in “evosystem” services, and are quantified using the phylogenetic diversity (PD) measure. High species-level biodiversity losses may or may not correspond to high PD losses. If climate change impacts are clumped on the phylogeny, then loss of deeper phylogenetic branches can mean disproportionately large PD loss for a given degree of species loss. Over time, successive species extinctions within a clade each may imply only a moderate loss of PD, until the last species within that clade goes extinct, and PD drops precipitously. Emerging methods of “phylogenetic risk analysis” address such phylogenetic tipping points by adjusting conservation priorities to better reflect risk of such worst-case losses. We have further developed and explored this approach for one of the most threatened taxonomic groups, corals. Based on a phylogenetic tree for the coral genus Acropora, we identify cases where worst-case PD losses may be avoided by designing risk-averse conservation priorities. We also propose spatial heterogeneity measures to assess possible changes in the geographic distribution of coral PD. PMID:24832524

  10. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  11. Relaxing USOS Solar Array Constraints for Russian Vehicle Undocking

    NASA Technical Reports Server (NTRS)

    Menkin, Evgeny; Schrock, Mariusz; Schrock, Rita; Zaczek, Mariusz; Gomez, Susan; Lee, Roscoe; Bennet, George

    2011-01-01

    With the retirement of the Space Shuttle cargo delivery capability and the ten-year life extension of the International Space Station (ISS), more emphasis is being put on preserving the service life of ISS critical components. Current restrictions on United States Orbital Segment (USOS) Solar Array (SA) positioning during Russian Vehicle (RV) departure from ISS nadir and zenith ports cause the SAs to be positioned in the plume field of Service Module thrusters, leading to degradation of the SAs as well as potential damage to the Sun-tracking Beta Gimbal Assemblies (BGA). These restrictions are imposed because of the single fault tolerant RV Motion Control System (MCS), which does not meet ISS Safety requirements for catastrophic hazards and dictates a 16-degree Solar Array Rotary Joint position that ensures the ISS and RV relative motion post-separation does not lead to collision. The purpose of this paper is to describe a methodology and the analysis that was performed to determine relative motion trajectories of the ISS and separating RV for nominal and contingency cases. Analysis was performed in three phases that included ISS free drift prior to Visiting Vehicle separation, ISS and Visiting Vehicle relative motion analysis and clearance analysis. First, the ISS free drift analysis determined the worst case attitude and attitude rate excursions prior to RV separation based on a series of different configurations and mass properties. Next, the relative motion analysis calculated the separation trajectories while varying the initial conditions, such as docking mechanism performance, Visiting Vehicle MCS failure, departure port location, ISS attitude and attitude rates at the time of separation, etc. The analysis employed both orbital mechanics and rigid body rotation calculations while accounting for various atmospheric conditions and gravity gradient effects. The resulting relative motion trajectories were then used to determine the worst case separation envelopes during the clearance analysis. Analytical models were developed individually for each stage and the results were used to build initial conditions for the following stages. In addition to the analysis approach, this paper also discusses the analysis results, showing worst case relative motion envelopes, the recommendations for ISS appendage positioning and the suggested approach for future analyses.

  12. Applying MDA to SDR for Space to Model Real-time Issues

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2007-01-01

    NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.
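
    One concrete worst-case check that such a real-time PSM has to pass is fixed-priority response-time analysis. The sketch below iterates the standard recurrence R = C + sum over higher-priority tasks of ceil(R/Tj)*Cj for a rate-monotonic task set; the task parameters are illustrative and not taken from the study.

        import math

        def worst_case_response(tasks):
            """tasks: list of (C, T) = (worst-case execution time, period),
            sorted highest priority first (rate-monotonic: shortest T first).
            Returns each task's worst-case response time, or None if its
            deadline (taken equal to the period) is missed."""
            results = []
            for i, (C, T) in enumerate(tasks):
                R = C
                while True:
                    # Interference from higher-priority tasks released in [0, R).
                    R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
                    if R_next > T:       # deadline miss: unschedulable
                        R = None
                        break
                    if R_next == R:      # fixed point reached
                        break
                    R = R_next
                results.append(R)
            return results

        # Example (times in ms): prints [1, 3, 7], so all deadlines are met.
        print(worst_case_response([(1, 4), (2, 8), (3, 16)]))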

  13. Level II scour analysis for Bridge 120 (LEICUS00070120) on U.S. Route 7, crossing the Leicester River, Leicester, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.; Severance, Timothy

    1997-01-01

    Contraction scour for all modelled flows ranged from 3.8 to 6.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 4.0 to 6.7 ft. The worst-case abutment scour also occurred at the 500-year discharge. Pier scour ranged from 9.1 to 10.2 ft. The worst-case pier scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  14. Level II scour analysis for Bridge 49 (WODSTH00990049) on Town Highway 99, crossing Gulf Brook, Woodstock, Vermont

    USGS Publications Warehouse

    Olson, Scott A.; Hammond, Robert E.

    1996-01-01

    Contraction scour for all modelled flows ranged from 0.0 to 0.9 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour at the left abutment ranged from 3.1 to 10.3 ft. with the worst-case occurring at the 500-year discharge. Abutment scour at the right abutment ranged from 6.4 to 10.4 ft. with the worst-case occurring at the 100-year discharge.Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  15. Level II scour analysis for Bridge 26 (JAMATH00010026) on Town Highway 1, crossing Ball Mountain Brook, Jamaica, Vermont

    USGS Publications Warehouse

    Burns, Ronda L.; Medalie, Laura

    1997-01-01

    Contraction scour for the modelled flows ranged from 1.0 to 2.7 ft. The worst-case contraction scour occurred at the incipient-overtopping discharge. Abutment scour ranged from 8.4 to 17.6 ft. The worst-case abutment scour for the right abutment occurred at the incipient-overtopping discharge. For the left abutment, the worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  16. Level II scour analysis for Bridge 37 (TOWNTH00290037) on Town Highway 29, crossing Mill Brook, Townshend, Vermont

    USGS Publications Warehouse

    Burns, R.L.; Medalie, Laura

    1998-01-01

    Contraction scour for all modelled flows ranged from 0.0 to 2.1 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 6.7 to 8.7 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. Right abutment scour ranged from 7.8 to 9.5 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and Davis, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  17. A semi-quantitative World Health Organization grading scheme evaluating worst tumor differentiation predicts disease-free survival in oral squamous carcinoma patients.

    PubMed

    Jain, Dhruv; Tikku, Gargi; Bhadana, Pallavi; Dravid, Chandrashekhar; Grover, Rajesh Kumar

    2017-08-01

    We investigated World Health Organization (WHO) grading and pattern of invasion based histological schemes as independent predictors of disease-free survival, in oral squamous carcinoma patients. Tumor resection slides of eighty-seven oral squamous carcinoma patients [pTNM: I&II/III&IV-32/55] were evaluated. Besides examining various patterns of invasion, invasive front grade, predominant and worst (highest) WHO grade were recorded. For worst WHO grading, poor-undifferentiated component was estimated semi-quantitatively at advancing tumor edge (invasive growth front) in histology sections. Tumor recurrence was observed in 31 (35.6%) cases. The 2-year disease-free survival was 47% [Median: 656; follow-up: 14-1450] days. Using receiver operating characteristic curves, we defined poor-undifferentiated component exceeding 5% of tumor as the cutoff to assign an oral squamous carcinoma as grade-3, when following worst WHO grading. Kaplan-Meier curves for disease-free survival revealed prognostic association with nodal involvement, tumor size, worst WHO grading; most common pattern of invasion and invasive pattern grading score (sum of two most predominant patterns of invasion). In further multivariate analysis, tumor size (>2.5cm) and worst WHO grading (grade-3 tumors) independently predicted reduced disease-free survival [HR, 2.85; P=0.028 and HR, 3.37; P=0.031 respectively]. The inter-observer agreement was moderate for observers who semi-quantitatively estimated percentage of poor-undifferentiated morphology in oral squamous carcinomas. Our results support the value of semi-quantitative method to assign tumors as grade-3 with worst WHO grading for predicting reduced disease-free survival. Despite limitations, of the various histological tumor stratification schemes, WHO grading holds adjunctive value for its prognostic role, ease and universal familiarity. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Isolator fragmentation and explosive initiation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Rae, Philip John; Foley, Timothy J.

    2016-09-19

    Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst-case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.

  19. Isolator fragmentation and explosive initiation tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, Peter; Rae, Philip John; Foley, Timothy J.

    2015-09-30

    Three tests were conducted to evaluate the effects of firing an isolator in proximity to a barrier or explosive charge. The tests with explosive were conducted without a barrier, on the basis that since any barrier will reduce the shock transmitted to the explosive, bare explosive represents the worst-case from an inadvertent initiation perspective. No reaction was observed. The shock caused by the impact of a representative plastic material on both bare and cased PBX 9501 is calculated in the worst-case, 1-D limit, and the known shock response of the HE is used to estimate minimum run-to-detonation lengths. The estimates demonstrate that even 1-D impacts would not be of concern and that, accordingly, the divergent shocks due to isolator fragment impact are of no concern as initiating stimuli.

  20. Lunar Polar Illumination for Power Analysis

    NASA Technical Reports Server (NTRS)

    Fincannon, James

    2008-01-01

    This paper presents illumination analyses using the latest Earth-based radar digital elevation model (DEM) of the lunar south pole and an independently developed analytical tool. These results enable the optimum sizing of solar/energy storage lunar surface power systems since they quantify the timing and durations of illuminated and shadowed periods. Filtering and manual editing of the DEM based on comparisons with independent imagery were performed and a reduced resolution version of the DEM was produced to reduce the analysis time. A comparison of the DEM with lunar limb imagery was performed in order to validate the absolute heights over the polar latitude range, the accuracy of which affects the impact of long range, shadow-casting terrain. Average illumination and energy storage duration maps of the south pole region are provided for the worst and best case lunar day using the reduced resolution DEM. Average illumination fractions and energy storage durations are presented for candidate low energy storage duration south pole sites. The best site identified using the reduced resolution DEM required a 62 hr energy storage duration using a fast recharge power system. Solar and horizon terrain elevations as well as illumination fraction profiles are presented for the best identified site and the data for both the reduced resolution and high resolution DEMs compared. High resolution maps for three low energy storage duration areas are presented showing energy storage duration for the worst case lunar day, surface height, and maximum absolute surface slope.
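
    The energy-storage durations quoted above follow directly from each site's illumination time history: storage must bridge the longest continuous shadow period. A minimal sketch of that reduction, run on a synthetic timeline rather than the paper's DEM-derived data:

        def max_eclipse_hours(illuminated, step_hours=1.0):
            """illuminated: sequence of booleans sampled every step_hours.
            Returns the longest continuous dark period, i.e. the minimum
            energy-storage duration a solar power system must bridge."""
            worst = run = 0.0
            for lit in illuminated:
                run = 0.0 if lit else run + step_hours
                worst = max(worst, run)
            return worst

        # Example: a mostly lit lunar day with one 62 h shadow gap.
        timeline = [True] * 300 + [False] * 62 + [True] * 346
        print(max_eclipse_hours(timeline))   # 62.0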

  1. A longitudinal study of psychological distress and exposure to trauma reminders after terrorism.

    PubMed

    Glad, Kristin A; Hafstad, Gertrud S; Jensen, Tine K; Dyb, Grete

    2017-08-01

    The aim of this study was threefold: (1) to examine the type and frequency of trauma reminders reported by survivors 2.5 years after a terrorist attack; (2) to examine whether frequency of exposure to trauma reminders is associated with psychological distress and level of functioning; and (3) to compare the worst trauma reminders reported by the same survivors at 2 different time points. Participants were 261 survivors (52.1% male; mean age = 22.1 years, SD = 4.76) of the 2011 massacre on Utøya Island, Norway, who were interviewed face-to-face 14-15 and 30-32 months postterror. Participants were asked how often they had experienced various trauma reminders in the past month, which reminder was the worst, and how distressing it was. Current posttraumatic reactions were measured using the University of California at Los Angeles PTSD Reaction Index and an 8-item version of the Hopkins Symptom Checklist-25. Auditory reminders were most frequently encountered and the most distressing. Frequency of exposure to trauma reminders was positively correlated with symptoms of posttraumatic stress disorder (PTSD), anxiety, and depression, as well as negatively correlated with level of functioning, over time. Almost 20% of the survivors reported being very distressed by their worst reminder 2.5 years postterror. Less than half reported the same worst reminder at both time points. Trauma reminders, especially auditory reminders, are prevalent and distressing for years after a terrorist attack. Exposure to reminders may be important not only in the development and maintenance of PTSD but also in a broader conceptualization of posttraumatic reactions and functioning. Which reminder survivors appraise as the worst may fluctuate over time. It is important to help survivors identify and cope with reminders. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Managing risk in a challenging financial environment.

    PubMed

    Kaufman, Kenneth

    2008-08-01

    Five strategies can help hospital financial leaders balance their organizations' financial and risk positions: Understand the hospital's financial condition; Determine the desired level of risk; Consider total risk; Use a portfolio approach; Explore best-case/worst-case scenarios to measure risk.

  3. Phase Noise Influence in Long-range Coherent Optical OFDM Systems with Delay Detection, IFFT Multiplexing and FFT Demodulation

    NASA Astrophysics Data System (ADS)

    Jacobsen, Gunnar; Xu, Tianhua; Popov, Sergei; Sergeyev, Sergey; Zhang, Yimo

    2012-12-01

    We present a study of the influence of dispersion induced phase noise for CO-OFDM systems using FFT multiplexing/IFFT demultiplexing techniques (software based). The software based system provides a method for a rigorous evaluation of the phase noise variance caused by Common Phase Error (CPE) and Inter-Carrier Interference (ICI), including, for the first time to our knowledge, the effect of equalization enhanced phase noise (EEPN) in explicit form. This, in turn, leads to an analytic BER specification. Numerical results focus on a CO-OFDM system with 10-25 GS/s QPSK channel modulation. A worst case constellation configuration is identified for the phase noise influence and the resulting BER is compared to the BER of a conventional single channel QPSK system with the same capacity as the CO-OFDM implementation. Results are evaluated as a function of transmission distance. For both types of systems, the phase noise variance increases significantly with increasing transmission distance. For a total capacity of 400 (1000) Gbit/s, the transmission distance to keep the BER < 10⁻² for the worst case CO-OFDM design is less than 800 and 460 km, respectively, whereas for a single channel QPSK system it is less than 1400 and 560 km.

  4. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Schild, S; Bues, M

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of the prescribed dose to the planning target volume (PTV), which is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme, which addresses set-up and range uncertainties through beamlet optimization. Plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Although robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.

  5. Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety

    NASA Astrophysics Data System (ADS)

    Mikula, J. F. Kip

    2005-12-01

    This paper explores and defines the current accepted concept and philosophy of safety improvement based on a Reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a Reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as shown in the REBST case study. In the end it is concluded that an approach combining the two theories works best to reduce Safety Risk.

  6. Risk calculation variability over time in ocular hypertensive subjects.

    PubMed

    Song, Christian; De Moraes, Carlos Gustavo; Forchheimer, Ilana; Prata, Tiago S; Ritch, Robert; Liebmann, Jeffrey M

    2014-01-01

    To investigate the longitudinal variability of glaucoma risk calculation in ocular hypertensive (OHT) subjects. We reviewed the charts of untreated OHT patients followed in a glaucoma referral practice for a minimum of 60 months. Clinical variables collected at baseline and during follow-up included age, central corneal thickness (CCT), intraocular pressure (IOP), vertical cup-to-disc ratio (VCDR), and visual field pattern standard deviation (VFPSD). These were used to calculate the 5-year risk of conversion to primary open-angle glaucoma (POAG) at each follow-up visit using the Ocular Hypertension Treatment Study and European Glaucoma Prevention Study calculator (http://ohts.wustl.edu/risk/calculator.html). We also calculated the risk of POAG conversion based on the fluctuation of measured variables over time assuming the worst case scenarios (final age, highest PSD, lowest CCT, highest IOP, and highest VCDR) and best case scenarios (baseline age, lowest PSD, highest CCT, lowest IOP, and lowest VCDR) for each patient. Risk probabilities (%) were plotted against follow-up time to generate slopes of risk change over time. We included 27 untreated OHT patients (54 eyes) followed for a mean of 98.3±18.5 months. Seven individuals (25.9%) converted to POAG during follow-up. The mean 5-year risk of conversion for all patients in the study group ranged from 2.9% to 52.3% during follow-up. The mean slope of risk change over time was 0.37±0.81% increase/y. The mean slope for patients who reached a POAG endpoint was significantly greater than for those who did not (1.3±0.78 vs. 0.042±0.52%/y, P<0.01). In each patient, the mean risk of POAG conversion increased almost 10-fold when comparing the best case scenario with the worst case scenario (5.0% vs. 45.7%, P<0.01). The estimated 5-year risk of conversion to POAG among untreated OHT patients varies significantly during follow-up, with a trend toward increasing over time. Within the same individual, the estimated risk can vary almost 10-fold based on the variability of IOP, CCT, VCDR, and VFPSD. Therefore, a single risk calculation measurement may not be sufficient for accurate risk assessment, informed decision-making by patients, and physician treatment recommendations.

  7. Estimated cost of universal public coverage of prescription drugs in Canada

    PubMed Central

    Morgan, Steven G.; Law, Michael; Daw, Jamie R.; Abraham, Liza; Martin, Danielle

    2015-01-01

    Background: With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. Methods: We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Results: Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. Interpretation: The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. PMID:25780047

  8. Estimated cost of universal public coverage of prescription drugs in Canada.

    PubMed

    Morgan, Steven G; Law, Michael; Daw, Jamie R; Abraham, Liza; Martin, Danielle

    2015-04-21

    With the exception of Canada, all countries with universal health insurance systems provide universal coverage of prescription drugs. Progress toward universal public drug coverage in Canada has been slow, in part because of concerns about the potential costs. We sought to estimate the cost of implementing universal public coverage of prescription drugs in Canada. We used published data on prescribing patterns and costs by drug type, as well as source of funding (i.e., private drug plans, public drug plans and out-of-pocket expenses), in each province to estimate the cost of universal public coverage of prescription drugs from the perspectives of government, private payers and society as a whole. We estimated the cost of universal public drug coverage based on its anticipated effects on the volume of prescriptions filled, products selected and prices paid. We selected these parameters based on current policies and practices seen either in a Canadian province or in an international comparator. Universal public drug coverage would reduce total spending on prescription drugs in Canada by $7.3 billion (worst-case scenario $4.2 billion, best-case scenario $9.4 billion). The private sector would save $8.2 billion (worst-case scenario $6.6 billion, best-case scenario $9.6 billion), whereas costs to government would increase by about $1.0 billion (worst-case scenario $5.4 billion net increase, best-case scenario $2.9 billion net savings). Most of the projected increase in government costs would arise from a small number of drug classes. The long-term barrier to the implementation of universal pharmacare owing to its perceived costs appears to be unjustified. Universal public drug coverage would likely yield substantial savings to the private sector with comparatively little increase in costs to government. © 2015 Canadian Medical Association or its licensors.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, Henry

    This research was mostly concerned with asymmetric vertical displacement event (AVDE) disruptions, which are the worst case scenario for producing a large asymmetric wall force. This is potentially a serious problem in ITER.

  10. Paper to Electronic Questionnaires: Effects on Structured Questionnaire Forms

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.

    2009-01-01

    With the use of computers, paper questionnaires are being replaced by electronic questionnaires. The formats of traditional paper questionnaires have been found to affect a subject's rating. Consequently, the transition from paper to electronic format can subtly change results. The research presented here begins to determine how electronic questionnaire formats change subjective ratings. For formats where subjects used a flow chart to arrive at their rating, starting at the worst or middle rating of the flow chart was the most accurate, although subjects took slightly more time to arrive at their answers. Except for the electronic paper format, starting at the worst rating was the most preferred. The paper and electronic paper versions had the worst accuracy. Therefore, for flowchart-type questionnaires, flowcharts should start at the worst rating and work their way up to better ratings.

  11. DSN command system Mark III-78. [data processing

    NASA Technical Reports Server (NTRS)

    Stinnett, W. G.

    1978-01-01

    The Deep Space Network command Mark III-78 data processing system includes a capability for a store-and-forward handling method. The functions of (1) storing the command files at a Deep Space station; (2) attaching the files to a queue; and (3) radiating the commands to the spacecraft are straightforward. However, the total data processing capability is a result of assuming worst case, failure-recovery, or nonnominal operating conditions. Optional data processing functions include: file erase, clearing the queue, suspend radiation, command abort, resume command radiation, and close window time override.
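
    To make the store-and-forward handling concrete, the sketch below models a command queue exposing operations named after those listed above. The class shape and semantics are illustrative guesses for exposition only; they are not the DSN Mark III-78 design.

        from collections import deque

        class CommandQueue:
            def __init__(self, window_close):
                self.files = deque()               # stored command files
                self.window_close = window_close   # radiation window close time
                self.suspended = False

            def store(self, command_file):         # store file, attach to queue
                self.files.append(command_file)

            def erase(self, command_file):         # file erase
                self.files.remove(command_file)

            def clear(self):                       # clearing the queue
                self.files.clear()

            def suspend(self):                     # suspend radiation
                self.suspended = True

            def resume(self):                      # resume command radiation
                self.suspended = False

            def override_window(self, new_close):  # close-window time override
                self.window_close = new_close

            def radiate(self, now):
                """Radiate queued commands until the queue empties, radiation
                is suspended, or the window has closed."""
                sent = []
                while self.files and not self.suspended and now < self.window_close:
                    sent.append(self.files.popleft())
                return sent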

  12. 'Worst case' methodology for the initial assessment of societal risk from proposed major accident installations.

    PubMed

    Carter, D A; Hirst, I L

    2000-01-07

    This paper considers the application of one of the weighted risk indicators used by the Major Hazards Assessment Unit (MHAU) of the Health and Safety Executive (HSE) in formulating advice to local planning authorities on the siting of new major accident hazard installations. In such cases the primary consideration is to ensure that the proposed installation would not be incompatible with existing developments in the vicinity, as identified by the categorisation of the existing developments and the estimation of individual risk values at those developments. In addition a simple methodology, described here, based on MHAU's "Risk Integral" and a single "worst case" event analysis, is used to enable the societal risk aspects of the hazardous installation to be considered at an early stage of the proposal, and to determine the degree of analysis that will be necessary to enable HSE to give appropriate advice.

  13. Complexing agents and pH influence on chemical durability of type I moulded glass containers.

    PubMed

    Biavati, Alberto; Poncini, Michele; Ferrarini, Arianna; Favaro, Nicola; Scarpa, Martina; Vallotto, Marta

    2017-06-16

    Among the factors that affect glass surface chemical durability, pH and the presence of complexing agents in aqueous solution play the main role (1). Glass surface attack is also related to the delamination issue, with glass particles appearing in the pharmaceutical preparation. A few methods to check glass containers' delamination propensity and some control guidelines have been proposed (2,3). The present study emphasizes the possible synergy between a few complexing agents and pH in the chemical durability of borosilicate glass. Hydrolytic attack was performed in small-volume 23 ml type I glass containers autoclaved according to EP or USP for 1 hour at 121°C, in order to enhance the chemical attack due to time, temperature and the unfavourable surface/volume ratio. 0.048 M or 0.024 M (mol/L) solutions of citric, glutaric and acetic acids, EDTA (ethylenediaminetetraacetic acid) and sodium phosphate, with water for comparison, were used for the trials. The pH was adjusted ±0.05 units to fixed values of 5.5, 6.6, 7, 7.4, 8 and 9 with dilute LiOH solution. Since silicon is the main glass network former, silicon release into the attack solutions was chosen as the main index of glass surface attack and analysed by ICP-AES. The work was completed by analysing the silicon release, under the worst attack conditions, from moulded glass, soda-lime type II and tubing borosilicate glass vials to compare different glass compositions and forming technologies. Surface analysis by SEM was finally performed to check the surface status after the worst chemical attack condition with citric acid. Copyright © 2017, Parenteral Drug Association.

  14. 40 CFR 90.119 - Certification procedure-testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must select the duty cycle that will result in worst-case emission results for certification. For any... facility, in which case instrumentation and equipment specified by the Administrator must be made available... manufacturers may not use any equipment, instruments, or tools to identify malfunctioning, maladjusted, or...

  15. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

    Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a practical limit of a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
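
    The O(D^3) figure above matches the cost of one dense eigendecomposition, which is the workhorse step in many scalable metric learners: project a candidate matrix onto the positive semidefinite cone, then evaluate Mahalanobis distances. The sketch below shows only that building block; the dual update that produces the candidate matrix is omitted, so this is not the authors' full method.

        import numpy as np

        def project_psd(M_raw):
            """Nearest (Frobenius-norm) PSD matrix: clip negative eigenvalues.
            One eigendecomposition costs O(D^3), versus the roughly O(D^6.5)
            worst case quoted above for a general interior-point SDP solve."""
            M_sym = (M_raw + M_raw.T) / 2.0
            vals, vecs = np.linalg.eigh(M_sym)
            return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

        def mahalanobis(x, y, M):
            d = x - y
            return float(np.sqrt(d @ M @ d))       # well-defined since M is PSD

        rng = np.random.default_rng(1)
        M = project_psd(rng.normal(size=(5, 5)))   # a toy "learned" metric
        print(mahalanobis(rng.normal(size=5), rng.normal(size=5), M))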

  16. Level II scour analysis for Bridge 81 (MARSUS00020081) on U.S. Highway 2, crossing the Winooski River, Marshfield, Vermont

    USGS Publications Warehouse

    Ivanoff, Michael A.

    1997-01-01

    Contraction scour for all modelled flows ranged from 2.1 to 4.2 ft. The worst-case contraction scour occurred at the 500-year discharge. Left abutment scour ranged from 14.3 to 14.4 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping and 500-year discharge. Right abutment scour ranged from 15.3 to 18.5 ft. The worst-case right abutment scour occurred at the 100-year and the incipient roadway-overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  17. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
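
    Since the abstract names Dinkelbach's algorithm as the final stage, a minimal sketch of that method for maximizing a ratio f(x)/g(x) (here, rate over power) may help. The brute-force inner maximizer stands in for the paper's CCCP beamforming step, and the toy rate/power functions are assumptions for illustration.

        import numpy as np

        def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
            lam = 0.0
            for _ in range(max_iter):
                # Inner problem: maximize f(x) - lam * g(x) over the feasible set.
                vals = [f(x) - lam * g(x) for x in candidates]
                best = candidates[int(np.argmax(vals))]
                if max(vals) < tol:          # F(lam) ~ 0, so lam is the optimal ratio
                    return lam, best
                lam = f(best) / g(best)      # Dinkelbach update
            return lam, best

        # Toy example: scalar power p, rate log2(1 + p), total power p + 0.1.
        powers = np.linspace(0.01, 10.0, 1000)
        ee, p_opt = dinkelbach(lambda p: np.log2(1 + p), lambda p: p + 0.1, powers)
        print(round(float(ee), 3), round(float(p_opt), 3))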

  18. MP3 player listening sound pressure levels among 10 to 17 year old students.

    PubMed

    Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M

    2011-11-01

    Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤ 75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
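
    The exposure figures above use the usual equal-energy normalization of a listening level and daily listening duration to an 8 h day. The sketch below applies that standard occupational convention, Lex = LAeq + 10*log10(T / 8 h); the input values are illustrative, not subject data.

        import math

        def lex_8h(laeq_dba, hours_per_day):
            """Normalize a level held for hours_per_day to an 8 h exposure level."""
            return laeq_dba + 10.0 * math.log10(hours_per_day / 8.0)

        # Example: the worst-case median level (76 dBA) heard 2 h per day.
        print(round(lex_8h(76.0, 2.0), 1))   # 70.0 dBA Lex, below the 85 dBA limit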

  19. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  20. Level II scour analysis for Bridge 7 (CHARTH00010007) on Town Highway 1, crossing Mad Brook, Charleston, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.; Weber, Matthew A.

    1997-01-01

    Contraction scour for all modelled flows ranged from 0.0 to 0.3 ft. The worst-case contraction scour occurred at the incipient overtopping discharge, which was less than the 100-year discharge. Abutment scour ranged from 6.2 to 9.4 ft. The worst-case abutment scour for the right abutment was 9.4 feet at the 100-year discharge. The worst-case abutment scour for the left abutment was 8.6 feet at the incipient overtopping discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  1. Level II scour analysis for Bridge 16, (NEWBTH00500016) on Town Highway 50, crossing Halls Brook, Newbury, Vermont

    USGS Publications Warehouse

    Burns, Ronda L.; Degnan, James R.

    1997-01-01

    Contraction scour for all modelled flows ranged from 2.6 to 4.6 ft. The worst-case contraction scour occurred at the incipient roadway-overtopping discharge. The left abutment scour ranged from 11.6 to 12.1 ft. The worst-case left abutment scour occurred at the incipient roadway-overtopping discharge. The right abutment scour ranged from 13.6 to 17.9 ft. The worst-case right abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 46). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  2. Adhesive strength of total knee endoprostheses to bone cement - analysis of metallic and ceramic femoral components under worst-case conditions.

    PubMed

    Bergschmidt, Philipp; Dammer, Rebecca; Zietz, Carmen; Finze, Susanne; Mittelmeier, Wolfram; Bader, Rainer

    2016-06-01

    Evaluation of the adhesive strength of femoral components to the bone cement is a relevant parameter for predicting implant safety. In the present experimental study, three types of cemented femoral components (metallic, ceramic and silica/silane-layered ceramic) of the bicondylar Multigen Plus knee system, implanted on composite femora, were analysed. A pull-off test of the femoral components was performed after different loading and cementing conditions (four groups, each with n=3 metallic, ceramic and silica/silane-layered ceramic components). Pull-off forces were comparable for the metallic and the silica/silane-layered ceramic femoral components (mean 4769 N and 4298 N) under the standard test condition, whereas uncoated ceramic femoral components showed reduced pull-off forces (mean 2322 N). Loading under worst-case conditions decreased adhesive strength through loosening of the interface between implant and bone cement for the uncoated metallic and ceramic femoral components, respectively. Silica/silane-coated ceramic components remained stably fixed even under worst-case conditions. Loading at high flexion angles can induce interfacial tensile stress, which could promote early implant loosening. In conclusion, a silica/silane coating layer on the femoral component increased its adhesive strength to the bone cement. Thicker cement mantles (>2 mm) reduce the adhesive strength of the femoral component and can increase the risk of cement break-off.

  3. Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome.

    PubMed

    Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack

    2016-05-10

    Gleason scoring (GS) has major deficiencies, and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has recently been agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse as the outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. A diagnosis and grade were assigned to each core, as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure.

  4. Stochastic Robust Mathematical Programming Model for Power System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  5. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality at the worst field point is considered. When optimizing a lens design, iterations are made to improve this worst field point until a different field point becomes the worst. The same technique is used to determine the focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
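
    A minimal sketch of the minimax idea follows: the merit function returns only the worst field point's error, and the optimizer improves it until another field point takes over as the worst. The quadratic "error" model and field heights are toy assumptions, standing in for a real ray-traced image-quality metric.

        import numpy as np
        from scipy.optimize import minimize

        fields = np.array([0.0, 0.5, 0.7, 1.0])      # normalized field heights

        def merit_per_field(v):
            # Hypothetical error model: defocus term v[0], field-curvature v[1].
            return (v[0] + v[1] * fields ** 2 - 0.3 * fields) ** 2

        def worst_field_merit(v):
            return merit_per_field(v).max()          # only the worst point counts

        res = minimize(worst_field_merit, x0=[0.0, 0.0], method="Nelder-Mead")
        print(res.x, worst_field_merit(res.x))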

  6. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  7. Worst-case space radiation environments for geocentric missions

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.; Seltzer, S. M.

    1976-01-01

    Worst-case possible annual radiation fluences of energetic charged particles in the terrestrial space environment, and the resultant depth-dose distributions in aluminum, were calculated in order to establish absolute upper limits to the radiation exposure of spacecraft in geocentric orbits. The results are a concise set of data intended to aid in the determination of the feasibility of a particular mission. The data may further serve as guidelines in the evaluation of standard spacecraft components. Calculations were performed for each significant particle species populating or visiting the magnetosphere, on the basis of volume occupied by or accessible to the respective species. Thus, magnetospheric space was divided into five distinct regions using the magnetic shell parameter L, which gives the approximate geocentric distance (in earth radii) of a field line's equatorial intersect.

  8. Source location impact on relative tsunami strength along the U.S. West Coast

    NASA Astrophysics Data System (ADS)

    Rasmussen, L.; Bromirski, P. D.; Miller, A. J.; Arcas, D.; Flick, R. E.; Hendershott, M. C.

    2015-07-01

    Tsunami propagation simulations are used to identify which tsunami source locations would produce the highest amplitude waves on approach to key population centers along the U.S. West Coast. The reasons for preferential influence of certain remote excitation sites are explored by examining model time sequences of tsunami wave patterns emanating from the source. Distant bathymetric features in the West and Central Pacific can redirect tsunami energy into narrow paths with anomalously large wave height that have disproportionate impact on small areas of coastline. The source region generating the waves can be as little as 100 km along a subduction zone, resulting in distinct source-target pairs with sharply amplified wave energy at the target. Tsunami spectral ratios examined for transects near the source, after crossing the West Pacific, and on approach to the coast illustrate how prominent bathymetric features alter wave spectral distributions, and relate to both the timing and magnitude of waves approaching shore. To contextualize the potential impact of tsunamis from high-amplitude source-target pairs, the source characteristics of major historical earthquakes and tsunamis in 1960, 1964, and 2011 are used to generate comparable events originating at the highest-amplitude source locations for each coastal target. This creates a type of "worst-case scenario," a replicate of each region's historically largest earthquake positioned at the fault segment that would produce the most incoming tsunami energy at each target port. An amplification factor provides a measure of how the incoming wave height from the worst-case source compares to the historical event.
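
    As defined here, the amplification factor for a coastal target is simply the ratio of modeled incoming wave heights:

```latex
\mathrm{AF} \;=\; \frac{H_{\text{worst-case source}}}{H_{\text{historical source}}},
```

    evaluated at the same target port for a source event of the same magnitude.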

  9. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, SARSA and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
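
    A minimal policy-iteration sketch for a finite MDP, the method the abstract reports as best; this is a textbook form with placeholder P and R, not the authors' WSC encoding:

```python
import numpy as np

# P[a] is an S x S transition matrix for action a; R[s, a] is the
# expected immediate reward for taking action a in state s.
def policy_iteration(P, R, gamma=0.95):
    S, A = R.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(S)])
        r_pi = R[np.arange(S), policy]
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = np.stack([R[:, a] + gamma * P[a] @ v for a in range(A)], axis=1)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v  # a stable policy is optimal
        policy = new_policy
```

    The abstract's iterative policy evaluation variant replaces the exact linear solve with repeated Bellman backups until convergence.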

  10. Mixture toxicity of six sulfonamides and their two transformation products to green algae Scenedesmus vacuolatus and duckweed Lemna minor.

    PubMed

    Białk-Bielińska, Anna; Caban, Magda; Pieczyńska, Aleksandra; Stepnowski, Piotr; Stolte, Stefan

    2017-04-01

    Since humans and ecosystems are continually exposed to a very complex and permanently changing mixture of chemicals, there is increasing concern in the general public about the potential adverse effects they may cause. Among all "emerging pollutants", pharmaceuticals in particular have raised great environmental concern. For these reasons the aim of our study was to evaluate the mixture toxicity of six antimicrobial sulfonamides (SAs) and their two most commonly identified degradation products - sulfanilic acid (SNA) and sulfanilamide (SN) - to limnic green algae Scenedesmus vacuolatus and duckweed Lemna minor. The ecotoxicological data for the single toxicity of SNA and SN towards selected organisms are presented. The concept of Concentration Addition (CA) was applied to estimate the effects, and less than additive effects were observed. In general terms, it seems sufficiently precautionary for the aquatic environment to consider the toxicity of a sulfonamide mixture as additive. The Concentration Addition model proves to be a reasonable worst-case estimation. Such a comparative study on the mixture toxicity of sulfonamides and their transformation products has been presented for the first time. Copyright © 2017 Elsevier Ltd. All rights reserved.
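
    For reference, the Concentration Addition prediction referred to above takes the standard Loewe-additivity form:

```latex
\mathrm{EC}_{50,\mathrm{mix}}
  \;=\; \left( \sum_{i=1}^{n} \frac{p_i}{\mathrm{EC}_{50,i}} \right)^{-1},
```

    where p_i is the relative fraction of component i in the mixture; observed mixture effects weaker than this prediction are the "less than additive" outcome reported above.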

  11. Hepatitis A and E Co-Infection with Worst Outcome.

    PubMed

    Saeed, Anjum; Cheema, Huma Arshad; Assiri, Asaad

    2016-06-01

    Infections are still a major problem in developing countries like Pakistan because of poor sewage disposal and economic restraints. Acute viral hepatitis like A and E is not uncommon in the pediatric age group because of unhygienic food handling and poor sewage disposal, but the majority recover well without any complications. Co-infections are rare occurrences, and physicians need to be well aware while managing such conditions to avoid the worst outcome. Co-infection with hepatitis A and E is reported occasionally in the literature; however, other concurrent infections, such as hepatitis A with Salmonella and hepatotropic viruses like viral hepatitis B and C, are present in the literature. Co-infections should be kept in consideration when someone presents with atypical symptoms or an unusual disease course like the presented case. We report here a girl child who had acute hepatitis A and E concurrent infections, presented with hepatic encephalopathy, and had the worst outcome despite all supportive measures being taken.

  12. Implementation of School Health Promotion: Consequences for Professional Assistance

    ERIC Educational Resources Information Center

    Boot, N. M. W. M.; de Vries, N. K.

    2012-01-01

    Purpose: This case study aimed to examine the factors influencing the implementation of health promotion (HP) policies and programs in secondary schools and the consequences for professional assistance. Design/methodology/approach: Group interviews were held in two schools that represented the best and worst case of implementation of a health…

  13. Compression in the Superintendent Ranks

    ERIC Educational Resources Information Center

    Saron, Bradford G.; Birchbauer, Louis J.

    2011-01-01

    Sadly, the fiscal condition of school systems now not only is troublesome, but in some cases has surpassed all expectations for the worst-case scenario. Among the states, one common response is to drop funding for public education to inadequate levels, leading to permanent program cuts, school closures, staff layoffs, district dissolutions and…

  14. A measurement technique of time-dependent dielectric breakdown in MOS capacitors

    NASA Technical Reports Server (NTRS)

    Li, S. P.

    1974-01-01

    The statistical nature of time-dependent dielectric breakdown characteristics in MOS capacitors was evidenced by testing large numbers of capacitors fabricated on single wafers. A multipoint probe and automatic electronic visual display technique are introduced that will yield statistical results which are necessary for the investigation of temperature, electric field, thermal annealing, and radiation effects in the breakdown characteristics, and an interpretation of the physical mechanisms involved. It is shown that capacitors of area greater than 0.002 sq cm may yield worst-case results, and that a multipoint probe of capacitors of smaller sizes can be used to obtain a profile of nonuniformities in the SiO2 films.

  15. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can inflict cavitation and may lead to system failures in the worst case. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.

  16. Fast decoder for local quantum codes using Groebner basis

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2013-03-01

    Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n² log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.
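
    The core operation is multivariate polynomial division (reduction to a normal form against a set of divisors). A generic sympy illustration using a textbook example, not the decoder's Laurent-polynomial arithmetic over GF(2):

```python
from sympy import symbols, reduced

# Divide f by the ordered set G = [x*y - 1, y**2 - 1] under lex order:
# f = q1*(x*y - 1) + q2*(y**2 - 1) + r, where no term of r is divisible
# by the leading term of any divisor in G.
x, y = symbols('x y')
f = x**2 * y + x * y**2 + y**2
G = [x * y - 1, y**2 - 1]

quotients, remainder = reduced(f, G, x, y, order='lex')
print(quotients)   # [x + y, 1]
print(remainder)   # x + y + 1
```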

  17. Do mood and the receipt of work-based support influence nurse perceived quality of care delivery? A behavioural diary study.

    PubMed

    Jones, Martyn C; Johnston, Derek

    2013-03-01

    To examine the effect of nurse mood in the worst event of shift (negative affect, positive affect), receipt of work-based support from managers and colleagues, and colleague and patient involvement on perceived quality of care delivery. While the effect of the work environment on nurse mood is well documented, little is known about the effects of the worst event of shift on the quality of care delivered by nurses. This behavioural diary study employed within-subject and between-subject designs incorporating both cross-sectional and longitudinal elements. One hundred and seventy-one nurses in four large district general hospitals in England completed end-of-shift computerised behavioural diaries over three shifts to explore the effects of the worst clinical incident of shift. Diaries measured negative affect, positive affect, colleague involvement, receipt of work-based support and perceived quality of care delivery. Analysis used multilevel modelling (MLWIN 2.19; Centre for Multi-level Modelling, University of Bristol, Bristol, UK). High levels of negative affect and low levels of positive affect reported in the worst clinical incident of shift were associated with reduced perceived quality of care delivery. Receipt of managerial support and its interaction with negative affect had no relationship with perceived quality of care delivery. Perceived quality of care delivery deteriorated the most when the nurse reported a combination of high negative affect and no receipt of colleague support in the worst clinical incident of shift. Perceived quality of care delivery was also particularly influenced when the nurse reported low positive affect and colleague actions contributed to the problem. Receipt of colleague support is particularly salient in protecting perceived quality of care delivery, especially if the nurse also reports high levels of negative affect in the worst event of shift. The effect of work-based support on care delivery is complex and requires further investigation. © 2012 Blackwell Publishing Ltd.

  18. Extensions of the Einstein-Schrodinger non-symmetric theory of gravity

    NASA Astrophysics Data System (ADS)

    Shifflett, James A.

    We modify the Einstein-Schrödinger theory to include a cosmological constant Λz which multiplies the symmetric metric. The cosmological constant Λz is assumed to be nearly cancelled by Schrödinger's cosmological constant Λb which multiplies the nonsymmetric fundamental tensor, such that the total Λ = Λz + Λb matches measurement. The resulting theory becomes exactly Einstein-Maxwell theory in the limit as |Λz| → ∞. For |Λz| ~ 1/(Planck length)² the field equations match the ordinary Einstein and Maxwell equations except for extra terms which are < 10⁻¹⁶ of the usual terms for worst-case field strengths and rates-of-change accessible to measurement. Additional fields can be included in the Lagrangian, and these fields may couple to the symmetric metric and the electromagnetic vector potential, just as in Einstein-Maxwell theory. The ordinary Lorentz force equation is obtained by taking the divergence of the Einstein equations when sources are included. The Einstein-Infeld-Hoffmann (EIH) equations of motion match the equations of motion for Einstein-Maxwell theory to Newtonian/Coulombian order, which proves the existence of a Lorentz force without requiring sources. An exact charged solution matches the Reissner-Nordström solution except for additional terms which are ~ 10⁻⁶⁶ of the usual terms for worst-case radii accessible to measurement. An exact electromagnetic plane-wave solution is identical to its counterpart in Einstein-Maxwell theory. Pericenter advance, deflection of light and time delay of light have a fractional difference of < 10⁻⁵⁶ compared to Einstein-Maxwell theory for worst-case parameters. When a spin-1/2 field is included in the Lagrangian, the theory gives the ordinary Dirac equation, and the charged solution results in fractional shifts of < 10⁻⁵⁰ in Hydrogen atom energy levels. Newman-Penrose methods are used to derive an exact solution of the connection equations, and to show that the charged solution is Petrov type-D like the Reissner-Nordström solution. The Newman-Penrose asymptotically flat O(1/r²) expansion of the field equations is shown to match Einstein-Maxwell theory. Finally we generalize the theory to non-Abelian fields, and show that a special case of the resulting theory closely approximates Einstein-Weinberg-Salam theory.

  19. Fundamentals of Digital Engineering: Designing for Reliability

    NASA Technical Reports Server (NTRS)

    Katz, R.; Day, John H. (Technical Monitor)

    2001-01-01

    The concept of designing for reliability will be introduced, along with a brief overview of reliability, redundancy, and traditional methods of fault tolerance as applied to current logic devices. The fundamentals of advanced circuit design and analysis techniques will be the primary focus. The introduction will cover the definitions of key device parameters and how analysis is used to prove circuit correctness. Basic design techniques such as synchronous vs asynchronous design, metastable state resolution time/arbiter design, and finite state machine structure/implementation will be reviewed. Advanced topics will be explored such as skew-tolerant circuit design, the use of triple-modular redundancy and circuit hazards, device transients and preventative circuit design, lock-up states in finite state machines generated by logic synthesizers, device transient characteristics, radiation mitigation techniques, worst-case analysis, the use of timing analyzers and simulators, and others. Case studies and lessons learned from spaceflight designs will be given as examples.
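
    For the metastable-state resolution-time topic listed above, a commonly used first-order model (included here for context; all parameters are device-specific constants) relates resolution time to failure rate:

```latex
\mathrm{MTBF}(t_r) \;=\; \frac{e^{\,t_r/\tau}}{T_0 \, f_{\mathrm{clk}} \, f_{\mathrm{data}}},
```

    where t_r is the resolution time allowed before a synchronizer's output is sampled, \tau is the regeneration time constant of the latch, T_0 is its metastability window, and f_clk, f_data are the clock and data rates; allotting more resolution time improves MTBF exponentially.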

  20. Assessment of polychlorinated dibenzo-p-dioxins and dibenzofurans contribution from different media to surrounding duck farms.

    PubMed

    Lee, Wen-Jhy; Shih, Shun-I; Li, Hsing-Wang; Lin, Long-Full; Yu, Kuei-Min; Lu, Kueiwan; Wang, Lin-Chi; Chang-Chien, Guo-Ping; Fang, Kenneth; Lin, Mark

    2009-04-30

    Since the "Toxic Egg Event" broke out in central Taiwan, the possible sources of the high content of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in eggs have been a serious concern. In this study, the PCDD/F contents in different media (feed, soil and ambient air) were measured. Evaluation of the impact from electric arc furnace dust treatment plant (abbreviated as EAFDT plant), which is site-specific to the "Toxic Egg Event", on the duck total-PCDD/F daily intake was conducted by both Industrial Source Complex Short Term model (ISCST) and dry and wet deposition models. After different scenario simulations, the worst case was at farm A and at 200 g feed and 5 g soil for duck intake, and the highest PCDD/F contributions from the feed, original soil and stack flue gas were 44.92, 47.81, and 6.58%, respectively. Considering different uncertainty factors, such as the flow rate variation of stack flue gas and errors from modelling and measurement, the PCDD/F contribution fraction from the stack flue gas of EAFDT plant may increase up to twice as that for the worst case (6.58%) and become 13.2%, which was still much lower than that from the total contribution fraction (86.8%) of both feed and original soil. Fly ashes contained purposely in duck feed by the farmers was a potential major source for the duck daily intake. While the impact from EAFDT plant has been proven very minor, the PCDD/F content in the feed and soil, which was contaminated by illegal fly ash landfills, requires more attention.

  1. Evaluation and monitoring of UVR in Shield Metal ARC Welding processing.

    PubMed

    Peng, Chiung-yu; Liu, Hung-hsin; Chang, Cheng-ping; Shieh, Jeng-yueh; Lan, Cheng-hang

    2007-08-01

    This study established a comprehensive approach to monitoring UVR magnitude from Shield Metal Arc Welding (SMAW) processing and quantified the effective exposure based on measured data. The irradiances from welding UVR were calculated with the biological effectiveness parameter S(λ) for human exposure assessment. The spectral weighting function for UVR measurement and evaluation followed the American Conference of Governmental Industrial Hygienists (ACGIH) guidelines. Arc welding processing scatters bright light with UVR emission over the full UV spectrum (UVA, UVB, and UVC). The worst-case effective irradiance from an arc spot at a 50 cm distance with a 200 A electric current and an electrode E6011 (4 mm) is 311.0 μW cm⁻², with a maximum allowance time (Tmax) of 9.6 s. Distance is an important factor affecting the irradiance intensity. The worst-case effective irradiance values from arc welding at 100, 200, and 300 cm distances are 76.2, 16.6, and 12.1 μW cm⁻², with Tmax of 39.4, 180.7, and 247.9 s, respectively. Protective materials (glove and mask) were demonstrated to protect workers from hazardous UVR exposure. From this study, the methodology of UVR monitoring in SMAW processing was developed and established. It is recommended that welders be fitted with appropriate protective materials for protection from UVR emission hazards.
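
    The quoted maximum allowance times follow directly from the ACGIH daily effective dose limit of 3 mJ/cm² for actinic UV, assuming Tmax = limit / E_eff (a quick check against the reported numbers):

```python
# 3 mJ/cm^2 per day = 3000 uW*s/cm^2, so t_max (s) = 3000 / E_eff (uW/cm^2).
for e_eff in (311.0, 76.2, 16.6, 12.1):  # uW/cm^2, values from the abstract
    print(f"E_eff = {e_eff:6.1f} uW/cm^2 -> t_max = {3000.0 / e_eff:5.1f} s")
# -> 9.6, 39.4, 180.7 and 247.9 s, matching the reported values
```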

  2. All-sky search for periodic gravitational waves in the full S5 LIGO data

    NASA Astrophysics Data System (ADS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; Ajith, P.; Allen, B.; Allen, G. S.; Amador Ceron, E.; Amariutei, D.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Arain, M. A.; Araya, M. C.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Behnke, B.; Beker, M. G.; Bell, A. S.; Belletoile, A.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brummit, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Burmeister, O.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannizzo, J.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chaibi, O.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande-Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G.; Daw, E. J.; Day, R.; Dayanga, T.; de Rosa, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Del Prete, M.; Dent, T.; Dergachev, V.; Derosa, R.; Desalvo, R.; Dhurandhar, S.; di Fiore, L.; Diguglielmo, J.; di Lieto, A.; di Palma, I.; di Paolo Emilio, M.; di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Farr, W.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Flanigan, M.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P. J.; Fyffe, M.; Galimberti, M.; Gammaitoni, L.; Ganija, M. 
R.; Garcia, J.; Garofoli, J. A.; Garufi, F.; Gáspár, M. E.; Gemme, G.; Geng, R.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Gray, N.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Ha, T.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Hayler, T.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hong, T.; Hooper, S.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; Jang, H.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kamaretsos, I.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B.; Kim, C.; Kim, D.; Kim, H.; Kim, K.; Kim, N.; Kim, Y.-M.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, R.; Kwee, P.; Lam, P. K.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Leaci, P.; Lee, C. H.; Lee, H. M.; Leindecker, N.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Liguori, N.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Luan, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; MacDonald, E.; Machenschalk, B.; Macinnis, M.; MacLeod, D. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marandi, A.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McKechan, D. J. A.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menendez, D.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Moesta, P.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Mosca, S.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Nawrodt, R.; Necula, V.; Nelson, J.; Newton, G.; Nishizawa, A.; Nocera, F.; Nolting, D.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. 
J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Peiris, P.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Phelps, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C. R.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Redwine, K.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Ryll, H.; Sainathan, P.; Sakosky, M.; Salemi, F.; Samblowski, A.; Sammut, L.; Sancho de La Jordana, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schlamminger, S.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Smith, R. J. E.; Somiya, K.; Sorazu, B.; Soto, J.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Stein, A. J.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Tseng, K.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; van den Broeck, C.; van der Putten, S.; van Veggel, A. A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, X.; Wang, Z.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. 
G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yu, P.; Yvert, M.; Zadroźny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhang, W.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2012-01-01

    We report on an all-sky search for periodic gravitational waves in the frequency band 50-800 Hz and with the frequency time derivative in the range of 0 through −6×10⁻⁹ Hz/s. Such a signal could be produced by a nearby spinning and slightly nonaxisymmetric isolated neutron star in our Galaxy. After recent improvements in the search program that yielded a 10× increase in computational efficiency, we have searched in two years of data collected during LIGO’s fifth science run and have obtained the most sensitive all-sky upper limits on gravitational-wave strain to date. Near 150 Hz our upper limit on worst-case linearly polarized strain amplitude h₀ is 1×10⁻²⁴, while at the high end of our frequency range we achieve a worst-case upper limit of 3.8×10⁻²⁴ for all polarizations and sky locations. These results constitute a factor of 2 improvement upon previously published data. A new detection pipeline utilizing a loosely coherent algorithm was able to follow up weaker outliers, increasing the volume of space where signals can be detected by a factor of 10, but has not revealed any gravitational-wave signals. The pipeline has been tested for robustness with respect to deviations from the model of an isolated neutron star, such as caused by a low-mass or long-period binary companion.

  3. Analytic Prediction of Emergent Dynamics for Autonomous Negotiating Team (ANT) Systems

    DTIC Science & Technology

    2003-11-01

    It is determined that a "phase transition" behavior is to be expected. Crisis has the worst asymptotic behavior of the three strategies, growing harder with increasing deadline as opposed to with increasing communication time.

  4. 40 CFR 85.2115 - Notification of intent to certify.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...

  5. 40 CFR 85.2115 - Notification of intent to certify.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...

  6. 40 CFR 85.2115 - Notification of intent to certify.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... testing and durability demonstration represent worst case with respect to emissions of all those... submitted by the aftermarket manufacturer to: Mod Director, MOD (EN-340F), Attention: Aftermarket Parts, 401...

  7. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  8. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  9. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  10. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  11. 10 CFR 434.501 - General.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... size and type will vary only with climate, the number of stories, and the choice of simulation tool... practice for some climates or buildings, but represent a reasonable worst case of energy cost resulting...

  12. Kinematic and fatigue biomechanics of an interpositional facet arthroplasty device.

    PubMed

    Dahl, Michael C; Freeman, Andrew L

    2016-04-01

    Although approximately 30% of chronic lumbar pain can be attributed to the facets, limited surgical options exist for patients. Interpositional facet arthroplasty (IFA) is a novel treatment for lumbar facetogenic pain designed to provide patients who gain insufficient relief from medical interventional treatment options with long-term relief, filling a void in the facet pain treatment continuum. This study aimed to quantify the effect of IFA on segmental range of motion (ROM) compared with the intact state, and to observe device position and condition after 10,000 cycles of worst-case loading. In situ biomechanical analysis of the lumbar spine following implantation of a novel IFA device was carried out. Twelve cadaveric functional spinal units (L2-L3 and L5-S1) were tested in 7.5 Nm flexion-extension, lateral bending, and torsion while intact and following device implantation. Additionally, specimens underwent 10,000 cycles of worst-case complex loading and were tested in ROM again. Load-displacement and fluoroscopic data were analyzed to determine ROM and to evaluate device position during cyclic testing. Devices and facets were evaluated post testing. Institutional support for implant evaluation was provided by Zyga Technology. Range of motion post implantation decreased versus intact, and then was restored post cyclic-testing. Of the tested devices, 6.5% displayed slight movement (0.5-2 mm), all from tight L2-L3 facet joints with misplaced devices or insufficient cartilage. No damage was observed on the devices, and wear patterns were primarily linear. The results from this in situ cadaveric biomechanics and cyclic fatigue study demonstrate that a low-profile, conformable IFA device can maintain position and facet functionality post implantation and through 10,000 complex loading cycles. In vivo conditions were not accounted for in this model, which may affect implant behavior in ways not predictable via a biomechanical study. However, these data along with published 1-year clinical results suggest that IFA may be a valid treatment option in patients with chronic lumbar zygapophysial pain who have exhausted medical interventional options. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Validation of a contemporary prostate cancer grading system using prostate cancer death as outcome

    PubMed Central

    Berney, Daniel M; Beltran, Luis; Fisher, Gabrielle; North, Bernard V; Greenberg, David; Møller, Henrik; Soosay, Geraldine; Scardino, Peter; Cuzick, Jack

    2016-01-01

    Background: Gleason scoring (GS) has major deficiencies and a novel system of five grade groups (GS⩽6; 3+4; 4+3; 8; ⩾9) has been recently agreed and included in the WHO 2016 classification. Although verified in radical prostatectomies using PSA relapse for outcome, it has not been validated using prostate cancer death as an outcome in biopsy series. There is debate whether an 'overall' or 'worst' GS in biopsy series should be used. Methods: Nine hundred and eighty-eight prostate cancer biopsy cases were identified between 1990 and 2003, and treated conservatively. Diagnosis and grade were assigned to each core as well as an overall grade. Follow-up for prostate cancer death was until 31 December 2012. A log-rank test assessed univariable differences between the five grade groups based on overall and worst grade seen, and univariable and multivariable Cox proportional hazards regression was used to quantify differences in outcome. Results: Using both 'worst' and 'overall' GS yielded highly significant results on univariate and multivariate analysis, with overall GS slightly but insignificantly outperforming worst GS. There was a strong correlation between the five grade groups and prostate cancer death. Conclusions: This is the largest conservatively treated prostate cancer cohort with long-term follow-up and contemporary assessment of grade. It validates the formation of five grade groups and suggests that the 'worst' grade is a valid prognostic measure. PMID:27100731

  14. Long term elongation of Kevlar-49 single fiber at low temperature

    NASA Astrophysics Data System (ADS)

    Bersani, A.; Canonica, L.; Cariello, M.; Cereseto, R.; Di Domizio, S.; Pallavicini, M.

    2013-02-01

    We have measured the rate of elongation of a loaded Kevlar-49 fiber as a function of time at 4.2 K. The result puts a worst-case upper limit of 0.028% on the elongation rate ΔL/L for a 0.5 mm diameter fiber kept under a constant tension of 2.7 kg for 8 months. A value probably closer to reality is 0.004%. This result proves that Kevlar-49 can be safely used in cryogenic applications in which high mechanical stability under stress is required.

  15. Electrical Characterization of Hughes HCMP 1853D and RCA CDP1853D N-bit, CMOS, 1-of-8 Decoder Microcircuits

    NASA Technical Reports Server (NTRS)

    Stokes, R. L.

    1979-01-01

    Electrical characterization tests were performed on two manufacturers' versions of the same integrated circuit type. The devices were subjected to functional and AC and DC parametric tests at ambient temperatures of -55 C, -20 C, 25 C, 85 C, and 125 C. The data were analyzed and tabulated to show the effect of operating conditions on performance and to indicate parameter deviations among devices in each group. Accuracy was given precedence over test time efficiency where practical, and tests were designed to measure worst-case performance.

  16. RMP*Comp

    EPA Pesticide Factsheets

    You can use this free software program to complete the Off-site Consequence Analyses (both worst case scenarios and alternative scenarios) required under the Risk Management Program rule, so that you don't have to do calculations by hand.

  17. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...:
        Prevention measure               Standard              Credit (percent)
        Secondary containment >100%      NFPA 30               50
        Built/repaired to API standards  API STD 620/650/653   10
        Overfill protection standards    API RP 2350           5
        Testing/cathodic protection      API ...
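
    A sketch of how the tabulated prevention credits might combine in a planning-volume calculation (the tank volume and the cap on total credit are illustrative assumptions; the rule text governs the actual procedure):

```python
# Hypothetical breakout-tank example: each qualifying prevention measure
# reduces the worst-case discharge planning volume by its credit percentage.
tank_volume_bbl = 10_000
credits_pct = [50, 10, 5]  # secondary containment, API build, overfill
total_credit = min(sum(credits_pct), 75)  # assumed cap on total credit
planning_volume_bbl = tank_volume_bbl * (1 - total_credit / 100)
print(planning_volume_bbl)  # 3500.0 bbl for these assumed inputs
```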

  18. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...:
        Prevention measure               Standard              Credit (percent)
        Secondary containment > 100%     NFPA 30               50
        Built/repaired to API standards  API STD 620/650/653   10
        Overfill protection standards    API RP 2350           5
        Testing/cathodic protection      API ...

  19. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...:
        Prevention measure               Standard              Credit (percent)
        Secondary containment >100%      NFPA 30               50
        Built/repaired to API standards  API STD 620/650/653   10
        Overfill protection standards    API RP 2350           5
        Testing/cathodic protection      API ...

  20. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...:
        Prevention measure               Standard              Credit (percent)
        Secondary containment > 100%     NFPA 30               50
        Built/repaired to API standards  API STD 620/650/653   10
        Overfill protection standards    API RP 2350           5
        Testing/cathodic protection      API ...

  1. 49 CFR 194.105 - Worst case discharge.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...:
        Prevention measure               Standard              Credit (percent)
        Secondary containment > 100%     NFPA 30               50
        Built/repaired to API standards  API STD 620/650/653   10
        Overfill protection standards    API RP 2350           5
        Testing/cathodic protection      API ...

  2. Calculations of the skyshine gamma-ray dose rates from independent spent fuel storage installations (ISFSI) under worst case accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, J.V. III; Cramer, S.N.; Knight, J.R.

    1980-09-01

    Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.

  3. Most Probable Fire Scenarios in Spacecraft and Extraterrestrial Habitats: Why NASA's Current Test 1 Might Not Always be Conservative

    NASA Technical Reports Server (NTRS)

    Olson, S. L.

    2004-01-01

    NASA's current method of material screening determines fire resistance under conditions representing a worst-case for normal gravity flammability - the Upward Flame Propagation Test (Test 1). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.

  4. Most Probable Fire Scenarios in Spacecraft and Extraterrestrial Habitats: Why NASA's Current Test 1 Might Not Always Be Conservative

    NASA Technical Reports Server (NTRS)

    Olson, S. L.

    2004-01-01

    NASA's current method of material screening determines fire resistance under conditions representing a worst-case for normal gravity flammability - the Upward Flame Propagation Test (Test 1 [1]). Its simple pass-fail criterion eliminates materials that burn for more than 12 inches from a standardized ignition source. In addition, if a material drips burning pieces that ignite a flammable fabric below, it fails. The applicability of Test 1 to fires in microgravity and extraterrestrial environments, however, is uncertain because the relationship between this buoyancy-dominated test and actual extraterrestrial fire hazards is not understood. There is compelling evidence that Test 1 may not be the worst case for spacecraft fires, and we don't have enough information to assess whether it is adequate at Lunar or Martian gravity levels.

  5. LANDSAT-D MSS/TM tuned orbital jitter analysis model LDS900

    NASA Technical Reports Server (NTRS)

    Pollak, T. E.

    1981-01-01

    The final LANDSAT-D orbital dynamic math model (LSD900), comprised of all test-validated substructures, was used to evaluate the jitter response of the MSS/TM experiments. A dynamic forced-response analysis was performed at both the MSS and TM locations for all structural modes considered (through 200 Hz). The analysis determined the roll angular response of the MSS/TM experiments to the excitation generated by component operation. Cross-axis and cross-experiment responses were also calculated. The excitations were analytically represented by seven- and nine-term Fourier series approximations for the MSS and TM experiments respectively, which enabled linear harmonic solution techniques to be applied to the response calculations. The single worst-case jitter was estimated by variations of the eigenvalue spectrum of model LSD900. The probability of any worst-case mode occurrence was investigated.

  6. An Alaskan Theater Airlift Model.

    DTIC Science & Technology

    1982-02-19

    overt attack on American soil. In any case, such a reaction represents the worst-case scenario in that theater forces would be denied the advantages of...

  7. 78 FR 49831 - Endangered and Threatened Wildlife and Plants; Proposed Designation of Critical Habitat for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-15

    ... Service (NPS) for the Florida leafwing and the pine rockland ecosystem, in general. Sea Level Rise... habitat. In the best case scenario, which assumes low sea level rise, high financial resources, proactive... human population. In the worst case scenario, which assumes high sea level rise, low financial resources...

  8. A Different Call to Arms: Women in the Core of the Communications Revolution.

    ERIC Educational Resources Information Center

    Rush, Ramona R.

    A "best case" model for the role of women in the postindustrial communications era predicts positive leadership roles based on the preindustrial work characteristics of cooperation and consensus. A "worst case" model finds women entrepreneurs succumbing to the competitive male ethos and extracting the maximum amount of work…

  9. A kernel regression approach to gene-gene interaction detection for case-control studies.

    PubMed

    Larson, Nicholas B; Schaid, Daniel J

    2013-11-01

    Gene-gene interactions are increasingly being addressed as a potentially important contributor to the variability of complex traits. Consequently, attention has moved beyond single-locus analysis of association to more complex genetic models. Although several single-marker approaches toward interaction analysis have been developed, such methods suffer from very high testing dimensionality and do not take advantage of existing information, notably the definition of genes as functional units. Here, we propose a comprehensive family of gene-level score tests for identifying genetic elements of disease risk, in particular pairwise gene-gene interactions. Using kernel machine methods, we devise score-based variance component tests under a generalized linear mixed model framework. We conducted simulations based upon coalescent genetic models to evaluate the performance of our approach under a variety of disease models. These simulations indicate that our methods are generally higher powered than alternative gene-level approaches and at worst competitive with exhaustive SNP-level (where SNP is single-nucleotide polymorphism) analyses. Furthermore, we observe that simulated epistatic effects resulted in significant marginal testing results for the involved genes regardless of whether or not true main effects were present. We detail the benefits of our methods and discuss potential genome-wide analysis strategies for gene-gene interaction analysis in a case-control study design. © 2013 WILEY PERIODICALS, INC.
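
    A generic kernel-machine score statistic of the type described (schematic; the paper's exact construction may differ) is:

```latex
Q \;=\; (\mathbf{y} - \hat{\boldsymbol{\mu}}_0)^{\top} \, K \, (\mathbf{y} - \hat{\boldsymbol{\mu}}_0),
```

    where \hat{\boldsymbol{\mu}}_0 is the fitted mean under the null model and K is a kernel (genetic similarity) matrix; for a pairwise gene-gene interaction test, K can be built as the element-wise (Hadamard) product of the two genes' kernel matrices.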

  10. Monte Carlo simulation on the effect of different approaches to thalassaemia on gene frequency.

    PubMed

    Habibzadeh, F; Yadollahie, M

    2006-01-01

    We used computer simulation to determine variation in gene, heterozygous and homozygous frequencies induced by 4 different approaches to thalassaemia. These were: supportive therapy only; treat homozygous patients with a hypothetical modality phenotypically only; abort all homozygous fetuses; and prevent marriage between gene carriers. Gene frequency becomes constant with the second or the fourth strategy, and falls over time with the first or the third strategy. Heterozygous frequency varies in parallel with gene frequency. Using the first strategy, homozygous frequency falls over time; with the second strategy it becomes constant; and with the third and fourth strategies it falls to zero after the first generation. No matter which strategy is used, the population gene frequency, in the worst case, will remain constant over time.
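
    A toy Wright-Fisher-style Monte Carlo of the third strategy (abort all homozygous fetuses), illustrating why gene frequency falls over time under it; the population size, starting frequency and model structure are illustrative only:

```python
import random

def next_generation(q, pop_size):
    """One generation: each child draws two alleles at frequency q;
    homozygous pregnancies are terminated and replaced (strategy 3)."""
    births, mutant_alleles = 0, 0
    while births < pop_size:
        alleles = sum(random.random() < q for _ in range(2))
        if alleles == 2:
            continue  # homozygous fetus aborted, pregnancy replaced
        births += 1
        mutant_alleles += alleles  # 0 or 1
    return mutant_alleles / (2 * pop_size)

q = 0.05
for gen in range(1, 6):
    q = next_generation(q, 100_000)
    print(f"generation {gen}: q = {q:.4f}")  # q declines roughly as q/(1+q)
```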

  11. A computational evaluation of sedentary lifestyle effects on carotid hemodynamics and atherosclerotic events incidence.

    PubMed

    Caruso, Maria Vittoria; Serra, Raffaele; Perri, Paolo; Buffone, Gianluca; Caliò, Francesco Giuseppe; DE Franciscis, Stefano; Fragomeni, Fragomeni

    2017-01-01

    Hemodynamics has a key role in atheropathogenesis. Indeed, atherosclerotic phenomena occur in vessels characterized by complex geometry and flow patterns, like the carotid bifurcation. Moreover, lifestyle is a significant risk factor. The aim of this study is to evaluate the hemodynamic effects of two sedentary postures - the sitting and standing positions - on the carotid bifurcation, in order to identify the worst condition and to investigate atherosclerosis incidence. Computational fluid dynamics (CFD) was chosen to carry out the analysis, in which in vivo non-invasive measurements were used as boundary conditions. Furthermore, to compare the two conditions, one patient-specific 3D model of a carotid bifurcation was reconstructed starting from computed tomography. Different mechanical indicators correlated with atherosclerosis incidence were calculated in addition to flow pattern and pressure distribution: the time-averaged wall shear stress (TAWSS), the oscillatory shear index (OSI) and the relative residence time (RRT). The results showed that the bulb and the external carotid artery emergence are the most probable regions in which atherosclerotic events could happen. Indeed, low velocity and WSS values, high OSI and, as a consequence, areas with chaotic, swirling flow and stasis (high RRT) occur there. Moreover, the sitting position is the worst condition: over a cardiac cycle, TAWSS is lower by 17.2%, and OSI and RRT are greater by 17.5% and 21.2%, respectively. This study suggests that if a person spends much time in the sitting position, a high risk of plaque formation and, consequently, of stenosis could occur.
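
    The indicators named above have standard definitions in terms of the wall shear stress vector τ_w over a cardiac cycle of period T (stated here for reference):

```latex
\mathrm{TAWSS} = \frac{1}{T}\int_{0}^{T} \lvert \vec{\tau}_w \rvert \, dt,
\qquad
\mathrm{OSI} = \frac{1}{2}\left(1 -
  \frac{\left\lvert \int_{0}^{T} \vec{\tau}_w \, dt \right\rvert}
       {\int_{0}^{T} \lvert \vec{\tau}_w \rvert \, dt}\right),
\qquad
\mathrm{RRT} = \frac{1}{(1 - 2\,\mathrm{OSI})\,\mathrm{TAWSS}}.
```

    Low TAWSS combined with high OSI marks oscillating, low-magnitude shear, and RRT combines both into a single residence-time proxy.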

  12. Environmental Education, Activism and the Arts

    ERIC Educational Resources Information Center

    Branagan, Martin

    2005-01-01

    The global military-industrial complex is the world's worst polluter, so non-violence is a vital part of a sustainable world. Non-violent activism and education often occur simultaneously, with direct action frequently a dramatic attempt to educate audiences. Therefore, this paper discusses how the arts benefit both educative and non-violent…

  13. RMP Guidance for Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.

  14. Investigating Premature Ignition of Thruster Pressure Cartridges by Vibration-Induced Electrostatic Discharge

    NASA Technical Reports Server (NTRS)

    Woods, Stephen S.; Saulsberry, Regor

    2010-01-01

    Pyrotechnic thruster pressure cartridges (TPCs) are used for aeroshell separation on a new NASA crew launch vehicle. Nondestructive evaluation (NDE) during TPC acceptance testing indicated that internal assemblies moved during shock and vibration testing due to an internal bond anomaly. This caused concerns that the launch environment might produce the same movement and release propellant grains that might be prematurely ignited through impact or through electrostatic discharge (ESD) as grains vibrated against internal surfaces. Since a new lot could not be fabricated in time, a determination had to be made as to whether the lot was acceptable to fly. This paper discusses the ESD evaluation and a separate paper addresses the impact problem. A challenge to straightforward assessment existed due to the unavailability of triboelectric data characterizing the static charging characteristics of the propellants within the TPC. The approach examined the physical limitations for charge buildup within the TPC system geometry and evaluated it for discharge under simulated vibrations used to qualify components for launch. A facsimile TPC was fabricated using SS 301 for the case and surrogate worst-case materials for the propellants based on triboelectric data. System discharge behavior was evaluated by applying high voltage to the point of discharge in air and by placing worst-case charge accumulations within the facsimile TPC and forcing discharge. The facsimile TPC contained simulated propellant grains and lycopodium, a well-characterized indicator for static discharge in dust explosions, and was subjected to accelerations equivalent to the maximum accelerations possible during launch. The magnitude of charge generated within the facsimile TPC system was demonstrated to lie in a range of 100 to 10,000 times smaller than the spark energies measured to ignite propellant grains in industry-standard discharge tests. The test apparatus, methodology, and results are described in this paper.

  15. Collective doses to man from dumping of radioactive waste in the Arctic Seas.

    PubMed

    Nielsen, S P; Iosjpe, M; Strand, P

    1997-08-25

    A box model for the dispersion of radionuclides in the marine environment covering the Arctic Ocean and the North Atlantic Ocean has been constructed. Collective doses from ingestion pathways have been calculated from unit releases of the radionuclides 3H, 60Co, 63Ni, 90Sr, 129I, 137Cs, 239Pu and 241Am into a fjord on the east coast of Novaya Zemlya. The results show that doses from the shorter-lived radionuclides (e.g. 137Cs) derive mainly from seafood production in the Barents Sea. Doses from the longer-lived radionuclides (e.g. 239Pu) are delivered through marine produce further away from the Arctic Ocean. Collective doses were calculated for two release scenarios, both of which are based on information on the dumping of radioactive waste in the Barents and Kara Seas by the former Soviet Union and on preliminary information from the International Arctic Seas Assessment Programme. A worst-case scenario was assumed according to which all radionuclides in liquid and solid radioactive waste were available for dispersion in the marine environment at the time of dumping. Release of radionuclides from spent nuclear fuel was assumed to take place by direct corrosion of the fuel, ignoring the barriers that prevent direct contact between the fuel and the seawater. The second scenario assumed that releases of radionuclides from spent nuclear fuel do not occur until after failure of the protective barriers. All other liquid and solid radioactive waste was assumed to be available for dispersion at the time of discharge in both scenarios. The estimated collective dose was about 9 man Sv for the worst-case scenario and about 3 man Sv for the second scenario. In both cases, 137Cs is the radionuclide predicted to dominate the collective doses as well as the peak collective dose rates.

  16. Avoiding verisimilitude when modelling ecological responses to climate change: the influence of weather conditions on trapping efficiency in European badgers (Meles meles).

    PubMed

    Noonan, Michael J; Rahman, M Abidur; Newman, Chris; Buesching, Christina D; Macdonald, David W

    2015-10-01

    The signal for climate change effects can be abstruse; consequently, interpretations of evidence must avoid verisimilitude, or else misattribution of causality could compromise policy decisions. Examining climatic effects on wild animal population dynamics requires the ability to trap, observe or photograph, and to recapture study individuals consistently. In this regard, we use 19 years of data (1994-2012), detailing the life histories of 1179 individual European badgers over 3288 (re-)trapping events, to test whether trapping efficiency (TE) was associated with season, weather variables (both contemporaneous and time-lagged) and body-condition index (BCI). PCA factor loadings demonstrated that TE was affected significantly by temperature and precipitation, as well as by time lags in these variables. From multi-model inference, BCI was the principal driver of TE, where badgers in good condition were less likely to be trapped. Our analyses showed that this was enacted mechanistically via weather variables driving BCI, which in turn affected TE. Notably, the very conditions that made for poor trapping success have been associated with actual survival and population-abundance benefits in badgers. Using these findings to parameterize simulations projecting best-/worst-case scenario weather conditions and BCI resulted in a difference of 8.6% ± 4.9 (SD) in seasonal TE, leading to a potential 55.0% under-estimation of population abundance under the worst-case scenario and a 38.6% over-estimation under the best case. Interestingly, the simulations revealed that, while any single trapping session might prove misrepresentative of the true population abundance due to weather effects, prolonging capture-mark-recapture studies under sub-optimal conditions decreased the accuracy of population estimates significantly. We also use these projection scenarios to explore how weather could impact government-led trapping of badgers in the UK, in relation to TB management. We conclude that population monitoring must be calibrated against the likelihood that weather conditions could be altering trap success directly, and thereby biasing model design. © 2015 John Wiley & Sons Ltd.

  17. Valuing Treatments for Parkinson Disease Incorporating Process Utility: Performance of Best-Worst Scaling, Time Trade-Off, and Visual Analogue Scales.

    PubMed

    Weernink, Marieke G M; Groothuis-Oudshoorn, Catharina G M; IJzerman, Maarten J; van Til, Janine A

    2016-01-01

    The objective of this study was to compare treatment profiles including both health outcomes and process characteristics in Parkinson disease using best-worst scaling (BWS), time trade-off (TTO), and visual analogue scales (VAS). From a model comprising seven attributes with three levels each, six unique profiles were selected representing process-related factors and health outcomes in Parkinson disease. A Web-based survey (N = 613) of a general population was conducted to estimate process-related utilities using profile-based BWS (case 2), multiprofile-based BWS (case 3), TTO, and VAS. The rank order of the six profiles was compared, convergent validity among the methods was assessed, and individual-level analysis focused on the differentiation between pairs of profiles with the methods used. The aggregated health-state utilities for the six treatment profiles were highly comparable across all methods and no rank reversals were identified. On the individual level, the convergent validity between all methods was strong; however, respondents differentiated less between the utilities of closely related treatment profiles with the VAS or TTO than with BWS. For TTO and VAS, this resulted in nonsignificant differences in mean utilities for closely related treatment profiles. This study suggests that all methods are equally able to measure process-related utility when the aim is to estimate the overall value of treatments. On an individual level, such as in shared decision making, BWS allows for better prioritization of treatment alternatives, especially if they are closely related. The decision-making problem and the need for explicit trade-offs between attributes should determine the choice of method. Copyright © 2016. Published by Elsevier Inc.

  18. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    This article considers the bandwidth multicolouring problem (BMCP), a generalisation of the vertex colouring problem in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and those assigned to its neighbours is never less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation algorithms, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ)-optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can also be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, making it scalable to large graphs. The second proposed algorithm is compared with some well-known colouring algorithms, and the results show its efficiency in terms of colour set size and running time.
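
    The separation constraints that define the BMCP are straightforward to state in code. Below is a minimal feasibility check, assuming a dictionary-based graph encoding with per-edge and per-vertex distance thresholds; the encoding and function name are illustrative, not taken from the article.

```python
def is_valid_bmc(colours, edges, d_edge, d_self):
    """Check a candidate bandwidth multicolouring.

    colours: dict vertex -> set of assigned colours (integers)
    edges:   iterable of (u, v) pairs, matching the keys of d_edge
    d_edge:  dict (u, v) -> minimum |c_u - c_v| between neighbours
    d_self:  dict vertex -> minimum separation among a vertex's own colours
    """
    for v, cs in colours.items():
        cs = sorted(cs)
        # colours assigned to the same vertex must be mutually separated
        if any(b - a < d_self[v] for a, b in zip(cs, cs[1:])):
            return False
    for u, v in edges:
        # every colour pair across an edge must respect the threshold
        if any(abs(cu - cv) < d_edge[(u, v)]
               for cu in colours[u] for cv in colours[v]):
            return False
    return True
```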

  19. Planning Education for Regional Economic Integration: The Case of Paraguay and MERCOSUR.

    ERIC Educational Resources Information Center

    McGinn, Noel

    This paper examines the possible impact of MERCOSUR on Paraguay's economic and educational systems. MERCOSUR is a trade agreement among Argentina, Brazil, Paraguay, and Uruguay, under which terms all import tariffs among the countries will be eliminated by 1994. The countries will enter into a common economic market. The worst-case scenario…

  20. Carbon monoxide screen for signalized intersections COSIM, version 3.0 : technical documentation.

    DOT National Transportation Integrated Search

    2008-07-01

    The Illinois Department of Transportation (IDOT) currently uses the computer screening model Illinois : CO Screen for Intersection Modeling (COSIM) to estimate worst-case CO concentrations for proposed roadway : projects affecting signalized intersec...

  1. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  2. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  3. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  4. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  5. 40 CFR 68.25 - Worst-case release scenario analysis.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... used is based on TNT equivalent methods. (1) For regulated flammable substances that are normally gases... shall be used to determine the distance to the explosion endpoint if the model used is based on TNT...

  6. RMP Guidance for Warehouses - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Offsite consequence analysis (OCA) informs government and the public about potential consequences of an accidental toxic or flammable chemical release at your facility, and consists of a worst-case release scenario and alternative release scenarios.

  7. RMP Guidance for Chemical Distributors - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    How to perform the OCA for regulated substances, informing the government and the public about potential consequences of an accidental chemical release at your facility. Includes calculations for worst-case scenario, alternative scenarios, and endpoints.

  8. Gum Disease

    MedlinePlus

    ... damage to the tissue and bone supporting the teeth. In the worst cases, you can lose teeth. In gingivitis, the gums become red and swollen. ... flossing and regular cleanings by a dentist or dental hygienist. Untreated gingivitis can lead to periodontitis. If ...

  9. Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies

    NASA Astrophysics Data System (ADS)

    Vishnoi, U.; Noll, T. G.

    2012-09-01

    The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well-known, versatile approach that is widely applied in today's SoCs, especially, but not exclusively, for digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive as hardware accelerators for Application-Specific Instruction Processors (ASIPs), thereby overcoming the well-known energy-versus-flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce hardware complexity is an important research topic at present. In such receivers, CORDIC accelerators can be used for digital baseband processing (fixed point) and in Position-Velocity-Time estimation (floating point). A micro-architecture well suited to such applications is presented. This architecture is parameterized by the wordlengths as well as the number of iterations and can easily be extended to floating-point data formats. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, with the degree of pipelining organized as one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology; for this purpose, carry-select adders and logarithmic shifters were chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology and occupies a silicon area of only 1560 μm². The maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical and 463 MHz under worst-case technology and application corner conditions, respectively. Simulated dynamic power dissipation is 0.24 μW/MHz at 0.9 V; static power is 38 μW in the slow corner, 65 μW in the typical corner and 518 μW in the fast corner, respectively. The latter can be reduced by 43% in a 40-nm CMOS technology using 0.5 V reverse back-bias. These features are compared with results from different design styles as well as with an implementation in 28-nm CMOS technology. Interestingly, in the latter case area scales as expected, but worst-case performance and energy no longer scale well.
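
    As background to why such blocks need only adders and shifters, the core of CORDIC is a shift-and-add iteration that rotates a vector through precomputed arctangent angles. A minimal sketch of the circular rotation mode is given below; the word length, iteration count and software scale correction are illustrative choices, not the paper's hardware design.

```python
import math

def cordic_rotate(x, y, angle, n_iter=16, frac_bits=16):
    """Rotate (x, y) by `angle` radians with shift-and-add iterations.

    Fixed-point arithmetic with `frac_bits` fractional bits; the loop
    uses no multiplications. Converges for |angle| up to about 1.74 rad.
    """
    one = 1 << frac_bits
    # arctan(2^-i) table, precomputed once in fixed point
    atan_tab = [int(math.atan(2.0 ** -i) * one) for i in range(n_iter)]
    x, y, z = int(x * one), int(y * one), int(angle * one)
    for i in range(n_iter):
        d = 1 if z >= 0 else -1            # drive the residual angle to zero
        x, y = x - d * (y >> i), y + d * (x >> i)
        z -= d * atan_tab[i]
    # undo the constant CORDIC gain (about 1.6468)
    k = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(n_iter))
    return (x / one) / k, (y / one) / k
```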

  10. The Effect of Reaction Control System Thruster Plume Impingement on Orion Service Module Solar Array Power Production

    NASA Technical Reports Server (NTRS)

    Bury, Kristen M.; Kerslake, Thomas W.

    2008-01-01

    NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° towards the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.

  11. Response of the North American corn belt to climate warming, CO2

    NASA Astrophysics Data System (ADS)

    1983-08-01

    The climate of the North American corn belt was characterized to estimate the effects of climatic change on that agricultural region. Heat and moisture characteristics of the current corn belt were identified and mapped based on a simulated climate for a doubling of atmospheric CO2 concentrations. The result was a map of the projected corn belt corresponding to the simulated climatic change. Such projections were made with and without an allowance for earlier planting dates that could occur under a CO2-induced climatic warming. Because the direct effects of CO2 increases on plants, improvements in farm technology, and plant breeding are not considered, the resulting projections represent an extreme or worst case. The results indicate that even for such a worst case, climatic conditions favoring corn production would not extend very far into Canada. Climatic buffering effects of the Great Lakes would apparently retard northeastward shifts in corn-belt location.

  12. Performance of a normalized energy metric without jammer state information for an FH/MFSK system in worst case partial band jamming

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1985-01-01

    For a frequency-hopped noncoherent MFSK communication system without jammer state information (JSI) in a worst-case partial-band jamming environment, it is well known that the use of a conventional unquantized metric results in very poor performance. In this paper, a 'normalized' unquantized energy metric is suggested for such a system. It is shown that with this metric, one can save 2-3 dB in required signal energy over a system with a hard-decision metric without JSI for the same desired performance. When this very robust metric is compared to the conventional unquantized energy metric with JSI, the loss in required signal energy is shown to be small. Thus, the normalized metric provides performance comparable to systems for which JSI is known. Cutoff rate and bit error rate with dual-k coding are used as the performance measures.

  13. Centaur Propellant Thermal Conditioning Study

    NASA Technical Reports Server (NTRS)

    Blatt, M. H.; Pleasant, R. L.; Erickson, R. C.

    1976-01-01

    A wicking investigation revealed that passive thermal conditioning was feasible and provided a considerable weight advantage over active systems using throttled vent fluid in a Centaur D-1S launch vehicle. Experimental wicking correlations were obtained using empirical revisions to the analytical flow model. Thermal subcoolers were evaluated parametrically as a function of tank pressure and NPSP. Results showed that the RL10 category I engine was the best candidate for boost pump replacement, and that the option showing the lowest weight penalty employed passively cooled acquisition devices, thermal subcoolers, dry ducts between burns, and pumping of subcooler coolant back into the tank. A mixing correlation was identified for sizing the thermodynamic vent system mixer. Worst-case mixing requirements were determined by surveying the Centaur D-1T, D-1S, IUS, and space tug vehicles. Vent system sizing was based upon these worst-case requirements. Thermodynamic vent system/mixer weights were determined for each vehicle.

  14. VEGA Launch Vehicle Dynamic Environment: Flight Experience and Qualification Status

    NASA Astrophysics Data System (ADS)

    Di Trapani, C.; Fotino, D.; Mastrella, E.; Bartoccini, D.; Bonnet, M.

    2014-06-01

    During flight, the VEGA Launch Vehicle (LV) is equipped with more than 400 sensors (pressure transducers, accelerometers, microphones, strain gauges...) aimed at capturing the physical phenomena occurring during the mission. The main objective of these sensors is to verify that the flight conditions are compliant with the launch vehicle and satellite qualification status, and to characterize the phenomena that occur during flight. During VEGA's development, several test campaigns were performed in order to characterize its dynamic environment and identify the worst-case conditions, but only through flight data analysis is it possible to confirm the worst cases identified and to check the compliance of the operational-life conditions with the components' qualification status. The scope of the present paper is to compare the sinusoidal dynamic phenomena that occurred during VEGA's first and second flights and to give a summary of the launch vehicle qualification status.

  15. The Effect of Reaction Control System Thruster Plume Impingement on Orion Service Module Solar Array Power Production

    NASA Astrophysics Data System (ADS)

    Bury, Kristen M.; Kerslake, Thomas W.

    2008-06-01

    NASA's new Orion Crew Exploration Vehicle has geometry that orients the reaction control system (RCS) thrusters such that they can impinge upon the surface of Orion's solar array wings (SAW). Plume impingement can cause Paschen discharge, chemical contamination, thermal loading, erosion, and force loading on the SAW surface, especially when the SAWs are in a worst-case orientation (pointed 45° towards the aft end of the vehicle). Preliminary plume impingement assessment methods were needed to determine whether in-depth, time-consuming calculations were required to assess power loss. Simple methods for assessing power loss as a result of these anomalies were developed to determine whether plume-impingement-induced power losses were below the assumed contamination loss budget of 2 percent. This paper details the methods that were developed and applies them to Orion's worst-case orientation.

  16. Statistical analysis of QC data and estimation of fuel rod behaviour

    NASA Astrophysics Data System (ADS)

    Heins, L.; Groβ, H.; Nissen, K.; Wunderlich, F.

    1991-02-01

    The behaviour of fuel rods while in reactor is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (worst case dataset) in fuel rod design calculations. Distributions are not considered. The results obtained in this way are very conservative but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the replacement of the worst case dataset by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.

  17. ASTM F1717 standard for the preclinical evaluation of posterior spinal fixators: can we improve it?

    PubMed

    La Barbera, Luigi; Galbusera, Fabio; Villa, Tomaso; Costa, Francesco; Wilke, Hans-Joachim

    2014-10-01

    Preclinical evaluation of spinal implants is a necessary step to ensure their reliability and safety before implantation. The American Society for Testing and Materials reapproved the F1717 standard for the assessment of mechanical properties of posterior spinal fixators, which simulates a vertebrectomy model and recommends mimicking vertebral bodies using polyethylene blocks. This set-up should represent clinical use, but the available data in the literature are few. Anatomical parameters depending on the spinal level were compared to published data or to measurements from biplanar stereoradiography on 13 patients. Other mechanical variables describing implant design were considered, and all parameters were investigated using a numerical parametric finite element model. Stress values were calculated by considering either the combination of the average values for each parameter or their worst-case combination, depending on the spinal level. The standard set-up represents the anatomy of an instrumented average thoracolumbar segment quite well. The stress on the pedicular screw is significantly influenced by the lever arm of the applied load, the unsupported screw length, the position of the centre of rotation of the functional spine unit and the pedicular inclination with respect to the sagittal plane. The worst-case combination of parameters demonstrates that devices implanted below T5 could potentially undergo higher stresses than those described in the standard's suggestions (maximum increase of 22.2% at L1). We propose to revise F1717 in order to describe the anatomical worst-case condition we found at the L1 level: this will guarantee higher safety of the implant for a wider population of patients. © IMechE 2014.

  18. Topical Backgrounder: Evaluating Chemical Hazards in the Community: Using RMP's Offsite Consequence Analysis

    EPA Pesticide Factsheets

    Part of a May 1999 series on the Risk Management Program Rule and issues related to chemical emergency management. Explains hazard versus risk, worst-case and alternative release scenarios, flammable endpoints and toxic endpoints.

  19. General RMP Guidance - Chapter 4: Offsite Consequence Analysis

    EPA Pesticide Factsheets

    This chapter provides basic compliance information, not modeling methodologies, for people who plan to do their own air dispersion modeling. OCA is a required part of the risk management program, and involves worst-case and alternative release scenarios.

  20. INCORPORATING NONCHEMICAL STRESSORS INTO CUMMULATIVE RISK ASSESSMENTS

    EPA Science Inventory

    The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...

  1. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... limits of current technology, for the range of environmental conditions anticipated at your facility; and... Society for Testing of Materials (ASTM) publication F625-94, Standard Practice for Describing...

  2. 30 CFR 254.26 - What information must I include in the “Worst case discharge scenario” appendix?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., materials, support vessels, and strategies listed are suitable, within the limits of current technology, for... equipment. Examples of acceptable terms include those defined in American Society for Testing of Materials...

  3. All-Sky Search for Periodic Gravitational Waves in the Full S5 LIGO Data

    NASA Technical Reports Server (NTRS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; hide

    2011-01-01

    We report on an all-sky search for periodic gravitational waves in the frequency band 50-800 Hz and with the frequency time derivative in the range of 0 through -6 x 10(exp -9) Hz/s. Such a signal could be produced by a nearby spinning and slightly non-axisymmetric isolated neutron star in our galaxy. After recent improvements in the search program that yielded a 10x increase in computational efficiency, we have searched two years of data collected during LIGO's fifth science run and have obtained the most sensitive all-sky upper limits on gravitational wave strain to date. Near 150 Hz our upper limit on worst-case linearly polarized strain amplitude h(sub 0) is 1 x 10(exp -24), while at the high end of our frequency range we achieve a worst-case upper limit of 3.8 x 10(exp -24) for all polarizations and sky locations. These results constitute a factor of two improvement upon previously published data. A new detection pipeline utilizing a Loosely Coherent algorithm was able to follow up weaker outliers, increasing the volume of space where signals can be detected by a factor of 10, but has not revealed any gravitational wave signals. The pipeline has been tested for robustness with respect to deviations from the model of an isolated neutron star, such as those caused by a low-mass or long-period binary companion.

  4. Long-lasting permethrin-impregnated clothing: protective efficacy against malaria in hyperendemic foci, and laundering, wearing, and weathering effects on residual bioactivity after worst-case use in the rain forests of French Guiana.

    PubMed

    Most, Bruno; Pommier de Santi, Vincent; Pagès, Frédéric; Mura, Marie; Uedelhoven, Waltraud M; Faulde, Michael K

    2017-02-01

    Personal protective measures against hematophagous vectors constitute the first line of defense against arthropod-borne diseases. However, guidelines for the standardized testing and licensing of insecticide-treated clothing are still lacking. The aim of this study was to analyze the preventive effect of long-lasting polymer-coated permethrin-impregnated clothing (PTBDU) against malaria after exposure to high-level disease transmission sites, as well as the corresponding loss of permethrin and bioactivity during worst-case field use. Between August 2011 and June 2012, 25 personnel wearing PTBDUs and exposed for 9.5 person-months in hyperendemic malaria foci in the rain forest of French Guiana contracted no cases of malaria, whereas 125 persons wearing untreated uniforms only, exposed for 30.5 person-months, contracted 11 cases of malaria, indicating that PTBDU use significantly (p = 0.0139) protected against malaria infection. In the field, PTBDUs were laundered between 1 and 218 times (mean 25.2 ± 44.8). After field use, the mean remaining permethrin concentration in PTBDU fabric was 732.1 ± 321.1 mg/m², varying between 130 and 1270 mg/m² (mean 743.9 ± 304.2 mg/m²) in blouses and between 95 and 1290 mg/m² (mean 720.2 ± 336.9 mg/m²) in trousers. Corresponding bioactivity, measured according to internal licensing conditions as KD99 times against Aedes aegypti mosquitoes, varied between 27.5 and 142.5 min (mean 47.7 ± 22.1 min) for blouses and between 25.0 and 360 min (mean 60.2 ± 66.1 min) for trousers. We strongly recommend the use of long-lasting permethrin-impregnated clothing for the prevention of mosquito-borne diseases, including chikungunya, dengue and Zika fevers, which are currently resurging globally.

  5. Synthesis and operation of an FFT-decoupled fixed-order reversed-field pinch plasma control system based on identification data

    NASA Astrophysics Data System (ADS)

    Olofsson, K. Erik J.; Brunsell, Per R.; Witrant, Emmanuel; Drake, James R.

    2010-10-01

    Recent developments and applications of system identification methods for the reversed-field pinch (RFP) machine EXTRAP T2R have yielded plasma response parameters for decoupled dynamics. These data sets are fundamental for the real-time implementable, fast Fourier transform (FFT) decoupled, discrete-time, fixed-order, strongly stabilizing synthesis described in this work. Robustness is assessed over the data set by bootstrap calculation of the worst-case H∞-gain distribution of the sensitivity transfer function. Output tracking and magnetohydrodynamic mode m = 1 tracking are considered in the same framework, simply as two distinct weighted traces of a performance-channel output-covariance matrix as derived from the closed-loop discrete-time Lyapunov equation. The behaviour of the resulting multivariable controller is investigated with dedicated T2R experiments.

  6. Design data package and operating procedures for MSFC solar simulator test facility

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Design and operational data for the solar simulator test facility are reviewed. The primary goal of the facility is to evaluate the performance capability and worst-case failure modes of collectors which utilize either air or liquid transport media. The facility simulates environmental parameters such as solar radiation intensity, solar spectrum, collimation, uniformity, and solar attitude. The facility also simulates wind conditions of velocity and direction, solar system conditions imposed on the collector, collector fluid inlet temperature, and geometric factors of collector tilt and azimuth angles. Testing in the simulator provides collector efficiency data, the collector time constant, incident angle modifier data, and stagnation temperature values.

  7. Stressful life events and catechol-O-methyl-transferase (COMT) gene in bipolar disorder.

    PubMed

    Hosang, Georgina M; Fisher, Helen L; Cohen-Woods, Sarah; McGuffin, Peter; Farmer, Anne E

    2017-05-01

    A small body of research suggests that gene-environment interactions play an important role in the development of bipolar disorder. The aim of the present study is to contribute to this work by exploring the relationship between stressful life events and the catechol-O-methyl-transferase (COMT) Val 158 Met polymorphism in bipolar disorder. Four hundred eighty-two bipolar cases and 205 psychiatrically healthy controls completed the List of Threatening Experiences Questionnaire. Bipolar cases reported the events experienced 6 months before their worst depressive and manic episodes; controls reported those events experienced 6 months prior to their interview. The genotypic information for the COMT Val 158 Met variant (rs4680) was extracted from GWAS analysis of the sample. The impact of stressful life events was moderated by the COMT genotype for the worst depressive episode using a Val dominant model (adjusted risk difference = 0.09, 95% confidence intervals = 0.003-0.18, P = .04). For the worst manic episodes no significant interactions between COMT and stressful life events were detected. This is the first study to explore the relationship between stressful life events and the COMT Val 158 Met polymorphism focusing solely on bipolar disorder. The results of this study highlight the importance of the interplay between genetic and environmental factors for bipolar depression. © 2017 Wiley Periodicals, Inc.

  8. VLBI and GPS-based Time-Transfer Using CONT08 Data

    NASA Technical Reports Server (NTRS)

    Rieck, Carsten; Haas, Ruediger; Jaldehag, Kenneth; Jahansson, Jan

    2010-01-01

    One important prerequisite for geodetic Very Long Baseline Interferometry (VLBI) is the use of frequency standards with excellent short-term stability. This makes VLBI stations, which are often co-located with Global Navigation Satellite System (GNSS) receiving stations, interesting for studies of time- and frequency-transfer techniques. We present an assessment of VLBI time-transfer based on data from the two-week-long continuous IVS CONT08 VLBI campaign, using GPS Carrier Phase (GPSCP). CONT08 was a 15-day campaign in August 2008 that involved eleven VLBI stations on five continents. For CONT08 we estimated the worst-case VLBI frequency link stability between the stations of Onsala and Wettzell to be 1e-15 at one day. Comparisons with GPSCP confirm the VLBI results. We also identify time-transfer related challenges of the VLBI technique as used today.
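
    Link stabilities like the 1e-15-at-one-day figure above are conventionally expressed as Allan deviations of the clock comparison. A minimal sketch computing the non-overlapping Allan deviation from a series of measured time differences follows; the sampling assumptions and names are illustrative, not the authors' processing chain.

```python
import numpy as np

def allan_deviation(phase, tau0, m):
    """Non-overlapping Allan deviation at averaging time tau = m * tau0.

    phase: measured time differences x(t) between the two clocks, in
           seconds, sampled every tau0 seconds
    """
    x = np.asarray(phase)[::m]     # decimate to the averaging time tau
    tau = m * tau0
    # second differences of the time series capture frequency instability
    d2 = np.diff(x, n=2)
    avar = np.sum(d2 ** 2) / (2.0 * (len(x) - 2) * tau ** 2)
    return np.sqrt(avar)
```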

  9. "Just Let the Worst Students Go": A Critical Case Analysis of Public Discourse about Race, Merit, and Worth

    ERIC Educational Resources Information Center

    Zirkel, Sabrina; Pollack, Terry M.

    2016-01-01

    We present a case analysis of the controversy and public debate generated from a school district's efforts to address racial inequities in educational outcomes by diverting special funds from the highest performing students seeking elite college admissions to the lowest performing students who were struggling to graduate from high school.…

  10. Beyond the Moscow Treaty: Alternative Perspectives on the Future Roles and Utility of Nuclear Weapons

    DTIC Science & Technology

    2008-03-01

    ...power dimension, it is possible to imagine a best case (deep concert) and a worst case (adversarial tripolarity) and some less extreme outcomes, one...vanquished and the sub-regions have settled into relative stability). 5. Adversarial U.S.-Russia-China tripolarity: In this world, the regional

  11. Elementary Social Studies in 2005: Danger or Opportunity?--A Response to Jeff Passe

    ERIC Educational Resources Information Center

    Libresco, Andrea S.

    2006-01-01

    From the emphasis on lower-level test-prep materials to the disappearance of the subject altogether, elementary social studies is, in the best case scenario, being tested and, thus, taught with a heavy emphasis on recall; and, in the worst-case scenario, not being taught at all. In this article, the author responds to Jeff Passe's views on…

  12. Thermal Analysis of a Metallic Wing Glove for a Mach-8 Boundary-Layer Experiment

    NASA Technical Reports Server (NTRS)

    Gong, Leslie; Richards, W. Lance

    1998-01-01

    A metallic 'glove' structure has been built and attached to the wing of the Pegasus(trademark) space booster. An experiment on the upper surface of the glove has been designed to help validate boundary-layer stability codes in a free-flight environment. Three-dimensional thermal analyses have been performed to ensure that the glove structure design would be within allowable temperature limits in the experiment test section of the upper skin of the glove. Temperature results obtained from the design-case analysis show a peak temperature of 490 F at the leading edge. For the upper surface of the glove, approximately 3 in. back from the leading edge, temperature calculations indicate that transition occurs at approximately 45 sec into the flight profile. A worst-case heating analysis has also been performed to ensure that the glove structure would not have any detrimental effects on the primary objective of the Pegasus launch. A peak temperature of 805 F has been calculated on the leading edge of the glove structure. The temperatures predicted for the design case are well within the temperature limits of the glove structure, and the worst-case heating analysis temperature results are acceptable for the mission objectives.

  13. Dominance, biomass and extinction resistance determine the consequences of biodiversity loss for multiple coastal ecosystem processes.

    PubMed

    Davies, Thomas W; Jenkins, Stuart R; Kingham, Rachel; Kenworthy, Joseph; Hawkins, Stephen J; Hiddink, Jan G

    2011-01-01

    Key ecosystem processes such as carbon and nutrient cycling could be deteriorating as a result of biodiversity loss. However, currently we lack the ability to predict the consequences of realistic species loss on ecosystem processes. The aim of this study was to test whether species contributions to community biomass can be used as surrogate measures of their contribution to ecosystem processes. These were gross community productivity in a salt marsh plant assemblage and an intertidal macroalgae assemblage; community clearance of microalgae in a sessile suspension-feeding invertebrate assemblage; and nutrient uptake in an intertidal macroalgae assemblage. We conducted a series of biodiversity manipulations that represented realistic species extinction sequences in each of the three contrasting assemblages. Species were removed in a subtractive fashion so that biomass was allowed to vary with each species removal, and key ecosystem processes were measured at each stage of community disassembly. The functional contribution of species was directly proportional to their contribution to community biomass in a 1:1 ratio, a relationship that was consistent across three contrasting marine ecosystems and three ecosystem processes. This suggests that the biomass contributed by a species to an assemblage can be used to approximately predict the proportional decline in an ecosystem process when that species is lost. Such predictions represent "worst case scenarios" because, over time, extinction-resilient species can offset the loss of biomass associated with the extinction of competitors. We also modelled a "best case scenario" that accounts for compensatory responses by the extant species with the highest per capita contribution to ecosystem processes. These worst and best case scenarios could be used to predict the minimum and maximum species required to sustain threshold values of ecosystem processes in the future.
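
    Under the 1:1 relationship reported above, a species' share of community biomass directly predicts the worst-case proportional loss of an ecosystem process when that species goes extinct. A toy sketch follows; the species names and the no-compensation assumption are illustrative, not the study's model.

```python
def worst_case_process(biomass, lost):
    """Worst-case fraction of an ecosystem process remaining after the
    species in `lost` go extinct, under the 1:1 biomass rule and with
    no compensatory growth by the surviving species.

    biomass: dict species -> biomass contribution
    lost:    set of species removed
    """
    total = sum(biomass.values())
    return sum(b for s, b in biomass.items() if s not in lost) / total

# losing a species holding 30% of community biomass predicts, at worst,
# a 30% decline in the measured process
print(worst_case_process({"A": 3.0, "B": 5.0, "C": 2.0}, {"A"}))  # 0.7
```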

  14. Dominance, Biomass and Extinction Resistance Determine the Consequences of Biodiversity Loss for Multiple Coastal Ecosystem Processes

    PubMed Central

    Davies, Thomas W.; Jenkins, Stuart R.; Kingham, Rachel; Kenworthy, Joseph; Hawkins, Stephen J.; Hiddink, Jan G.

    2011-01-01

    Key ecosystem processes such as carbon and nutrient cycling could be deteriorating as a result of biodiversity loss. However, currently we lack the ability to predict the consequences of realistic species loss on ecosystem processes. The aim of this study was to test whether species contributions to community biomass can be used as surrogate measures of their contribution to ecosystem processes. These were gross community productivity in a salt marsh plant assemblage and an intertidal macroalgae assemblage; community clearance of microalgae in a sessile suspension-feeding invertebrate assemblage; and nutrient uptake in an intertidal macroalgae assemblage. We conducted a series of biodiversity manipulations that represented realistic species extinction sequences in each of the three contrasting assemblages. Species were removed in a subtractive fashion so that biomass was allowed to vary with each species removal, and key ecosystem processes were measured at each stage of community disassembly. The functional contribution of species was directly proportional to their contribution to community biomass in a 1:1 ratio, a relationship that was consistent across three contrasting marine ecosystems and three ecosystem processes. This suggests that the biomass contributed by a species to an assemblage can be used to approximately predict the proportional decline in an ecosystem process when that species is lost. Such predictions represent “worst case scenarios” because, over time, extinction-resilient species can offset the loss of biomass associated with the extinction of competitors. We also modelled a “best case scenario” that accounts for compensatory responses by the extant species with the highest per capita contribution to ecosystem processes. These worst and best case scenarios could be used to predict the minimum and maximum species required to sustain threshold values of ecosystem processes in the future. PMID:22163297

  15. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
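
    The polynomial-time result for regular languages rests on a standard idea: search the product of the network with the language's finite automaton, so that each search state pairs a graph vertex with an automaton state. A minimal Dijkstra-based sketch of that construction follows; the graph and DFA encodings are illustrative, not the authors' data structures.

```python
import heapq

def rl_constrained_shortest_path(graph, dfa, accept, start, goal, q0):
    """Shortest path from start to goal whose edge-label word is accepted
    by the DFA, via Dijkstra on the implicit product graph V x Q.

    graph: dict u -> list of (v, label, weight) with weight >= 0
    dfa:   dict (state, label) -> next state (partial transition function)
    """
    dist = {(start, q0): 0.0}
    pq = [(0.0, start, q0)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if d > dist.get((u, q), float("inf")):
            continue                       # stale queue entry
        if u == goal and q in accept:
            return d                       # cheapest accepting arrival
        for v, label, w in graph.get(u, []):
            q2 = dfa.get((q, label))
            if q2 is None:
                continue                   # this word would leave the language
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(pq, (nd, v, q2))
    return None                            # no feasible labeled path
```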

  16. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows how a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, SARSA and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
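
    Value iteration, one of the three dynamic-programming methods compared above, is compact enough to state generically. The sketch below is a minimal textbook version for a finite MDP; the encoding is illustrative and not the paper's WSC-specific model.

```python
def value_iteration(states, actions, trans, reward, gamma=0.95, eps=1e-6):
    """Finite-MDP value iteration returning V* and a greedy policy.

    actions: dict s -> list of actions available in s
    trans:   dict (s, a) -> list of (prob, s_next)
    reward:  dict (s, a) -> immediate reward
    """
    def q(s, a, V):
        return reward[(s, a)] + gamma * sum(p * V[s2] for p, s2 in trans[(s, a)])

    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(q(s, a, V) for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best                     # in-place (Gauss-Seidel) update
        if delta < eps:                     # stop when a full sweep changed little
            break
    policy = {s: max(actions[s], key=lambda a: q(s, a, V)) for s in states}
    return V, policy
```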

  17. Towards viable drinking water services.

    PubMed

    Hukka, J J; Katko, T S

    1997-01-01

    This article offers a framework for developing viable drinking water services and institutional development in developing countries. The framework evolved from the authors' research and field experience in transition and developing economies. Viability is related to operative technology, appropriate organizations, and adequate cost recovery within the context of water resources, human and economic resources, sociocultural conditions, and other constraints. The ability of institutions to solve the problems of coordination and production depends upon player motivation, the complexity of the environment, and the ability of the players to control the environment. Third-party enforcement of agreements is essential to reduce gains from opportunism, cheating, and shirking. Empirical research finds that per capita water production costs are 4 times higher in centralized systems and lowest in decentralized systems with coordination from a central party. Three-tiered systems of governments, regulators, and service providers are recommended. Management options must be consumer-driven. The worst-case scenario is consumers' reliance on vending and reselling with no alternative source of supply. Policies should have a strong focus on institutional reforms in the water sector, the development of a consumer-driven water sector, facilitation of appropriate private-public partnerships, sound management of existing capital assets, a system for building viability into national strategies for the water sector, and financially self-sufficient, consumer-responsible water supply organizations.

  18. Marking emergency exits and evacuation routes with sound beacons utilizing the precedence effect

    NASA Astrophysics Data System (ADS)

    van Wijngaarden, Sander J.; Bronkhorst, Adelbert W.; Boer, Louis C.

    2004-05-01

    Sound beacons can be extremely useful during emergency evacuations, especially when vision is obscured by smoke. When exits are marked with suitable sound sources, people can find these using only their capacity for directional hearing. Unfortunately, unless very explicit instructions were given, sound beacons currently commercially available (based on modulated noise) led to disappointing results during an evacuation experiment in a traffic tunnel. Only 19% out of 65 subjects were able to find an exit by ear. A signal designed to be more self-explanatory and less hostile-sounding (alternating chime signal and spoken message ``exit here'') increased the success rate to 86%. In a more complex environment - a mock-up of a ship's interior - routes to the exit were marked using multiple beacons. By applying carefully designed time delays between successive beacons, the direction of the route was marked, utilizing the precedence effect. Out of 34 subjects, 71% correctly followed the evacuation route by ear (compared to 24% for a noise signal as used in commercially available beacons). Even when subjects were forced to make a worst-case left-right decision at a T-junction, between two beacons differing only in the arrival of the first wave front, 77% made the right decision.

  19. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size, given information about task size (maximum, minimum), execution time (best case, worst case) and deadlines, is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput. Such a scheme has the advantage of allowing modules on critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
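
    The quadratic relation between supply voltage and power that motivates this work is easy to see in a toy model. The sketch below assumes a simplified linear frequency-voltage law (an illustrative assumption, not the dissertation's delay model) and picks the energy-minimal voltage for a single task with a deadline: because energy grows quadratically in voltage, the slowest speed that still meets the deadline is optimal.

```python
def min_energy_voltage(cycles, deadline, v_max, f_max, c_eff=1.0):
    """Energy-minimal supply voltage for one task under a hard deadline.

    Simplified model: clock frequency scales linearly with voltage,
    f = f_max * v / v_max, and dynamic energy is E = c_eff * v^2 * cycles.
    """
    f_needed = cycles / deadline            # required clock rate
    if f_needed > f_max:
        raise ValueError("deadline infeasible even at v_max")
    v = v_max * f_needed / f_max            # lowest voltage that meets it
    return v, c_eff * v * v * cycles

# running at half speed meets a relaxed deadline with 1/4 the energy per cycle
print(min_energy_voltage(cycles=1e6, deadline=2.0, v_max=1.8, f_max=1e6))
```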

  20. Zika virus in French Polynesia 2013-14: anatomy of a completed outbreak.

    PubMed

    Musso, Didier; Bossin, Hervé; Mallet, Henri Pierre; Besnard, Marianne; Broult, Julien; Baudouin, Laure; Levi, José Eduardo; Sabino, Ester C; Ghawche, Frederic; Lanteri, Marion C; Baud, David

    2018-05-01

    The Zika virus crisis exemplified the risk associated with emerging pathogens and was a reminder that preparedness for the worst-case scenario, although challenging, is needed. Herein, we review all data reported during the unexpected emergence of Zika virus in French Polynesia in late 2013. We focus on the new findings reported during this outbreak, especially the first description of severe neurological complications in adults and the retrospective description of CNS malformations in neonates, the isolation of Zika virus in semen, the potential for blood-transfusion transmission, mother-to-child transmission, and the development of new diagnostic assays. We describe the effect of this outbreak on health systems, the implementation of vector-borne control strategies, and the line of communication used to alert the international community of the new risk associated with Zika virus. This outbreak highlighted the need for careful monitoring of all unexpected events that occur during an emergence, to implement surveillance and research programmes in parallel with the management of cases, and to be prepared for the worst-case scenario. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Network immunization under limited budget using graph spectra

    NASA Astrophysics Data System (ADS)

    Zahedi, R.; Khansari, M.

    2016-03-01

    In this paper, we propose a new algorithm that minimizes the worst-case expected growth of an epidemic by reducing the size of the largest connected component (LCC) of the underlying contact network. The proposed algorithm is applicable to any level of available resources and, in contrast to the greedy approach of most immunization strategies, selects nodes simultaneously. In each iteration, the proposed method partitions the LCC into two groups; these are the best candidates for communities in that component that the available resources are sufficient to separate. Using Laplacian spectral partitioning, the proposed method performs community detection with a time complexity that rivals that of the best previous methods. Experiments show that our method outperforms targeted immunization approaches in both real and synthetic networks.
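
    The Laplacian spectral partitioning step named above splits a component according to the sign pattern of the Fiedler vector, the eigenvector of the second-smallest Laplacian eigenvalue. A minimal dense-matrix sketch follows; the NumPy implementation and names are illustrative, not the authors' code.

```python
import numpy as np

def fiedler_partition(adj):
    """Bipartition a connected component by the sign of its Fiedler vector.

    adj: symmetric (n, n) 0/1 adjacency matrix of the component.
    Returns two index arrays - the candidate 'communities' whose
    separation most cheaply shrinks the largest connected component.
    """
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj              # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    fiedler = vecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
    return np.where(fiedler < 0)[0], np.where(fiedler >= 0)[0]
```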

  2. Probability Quantization for Multiplication-Free Binary Arithmetic Coding

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
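
    One common way to remove the multiplication from a binary arithmetic coder - in the spirit of the approximation summarized above, though the report's exact quantization may differ - is to round the less-probable-symbol (LPS) probability to a power of two, so the interval update becomes a shift. A minimal sketch:

```python
import math

def quantize_lps_prob(p_lps, max_shift=8):
    """Round an LPS probability to the nearest power of two, 2^-k.

    With p approximated by 2^-k, the interval update
        range_lps = range * p
    becomes the shift  range >> k, eliminating the multiplication from
    the coder's inner loop at a small worst-case loss in coding efficiency.
    """
    k = round(-math.log2(p_lps))          # nearest exponent
    return max(1, min(max_shift, k))      # clamp to a sane shift range

# p = 0.2 is approximated by 2^-2 = 0.25; the coder then shifts by 2
print(quantize_lps_prob(0.2))  # -> 2
```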

  3. Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.

    DOT National Transportation Integrated Search

    2013-06-01

    Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer : program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon : monoxide (CO) concentrations near s...

  4. Global climate change: The quantifiable sustainability challenge

    EPA Science Inventory

    Population growth and the pressures spawned by increasing demands for energy and resource-intensive goods, foods and services are driving unsustainable growth in greenhouse gas (GHG) emissions. Recent GHG emission trends are consistent with worst-case scenarios of the previous de...

  5. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the Low Earth Orbital (LEO) and geosynchronous (GEO) environments. Note that the LEO charging environment also applies to the International Space Station (ISS), and the GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior and to answer some of the basic performance and survivability questions, in order to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front-side area of indium-tin-oxide-coated array designs successfully passed the arc frequency tests.

  6. Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory

    NASA Astrophysics Data System (ADS)

    Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo

    2017-04-01

    In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely (generalizations of): i) the rate-profile maximization in MIMO interference broadcast networks; and ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are distributed across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.
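
    For intuition about inner convex approximation, the sketch below minimizes a one-dimensional nonconvex function by repeatedly minimizing a convex quadratic surrogate built at the current point (proximal linearization); it is a scalar toy, not the paper's distributed multi-agent framework, and the test function and Lipschitz constant are made up:

        def sca_minimize(grad, x0, lipschitz, iters=200):
            """Successive convex approximation via proximal linearization:
            each step minimizes the convex surrogate
            f(xk) + grad(xk)*(x - xk) + (L/2)*(x - xk)**2."""
            x = x0
            for _ in range(iters):
                x = x - grad(x) / lipschitz   # surrogate minimizer, closed form
            return x

        # Nonconvex test function f(x) = x**4 - 3*x**2 + x.
        grad = lambda x: 4 * x**3 - 6 * x + 1
        print(sca_minimize(grad, x0=2.0, lipschitz=50.0))  # a stationary point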

  7. Characterization methodology for lead zirconate titanate thin films with interdigitated electrode structures

    NASA Astrophysics Data System (ADS)

    Nigon, R.; Raeder, T. M.; Muralt, P.

    2017-05-01

    The accurate evaluation of ferroelectric thin films operated with interdigitated electrodes is quite a complex task. In this article, we show how to correct the electric field and the capacitance in order to obtain identical polarization and CV loops for all geometrical variants. The simplest model is compared with corrections derived from Schwarz-Christoffel transformations and with finite element simulations. The correction procedure is experimentally verified, giving almost identical curves for a variety of gaps and electrode widths. It is shown that the measured polarization change corresponds to the average polarization change in the center plane between the electrode fingers, thus at the position where the electric field is most homogeneous with respect to direction and size. The question of the maximal achievable polarization in the various possible textures and compositional types of polycrystalline lead zirconate titanate thin films is revisited. In the best case, a soft (110) textured thin film with the morphotropic phase boundary composition should yield a value of 0.95Ps, and in the worst case, a rhombohedral (100) textured thin film should deliver a polarization of 0.74Ps.

  8. Fatigue degradation and electric recovery in Silicon solar cells embedded in photovoltaic modules

    PubMed Central

    Paggi, Marco; Berardone, Irene; Infuso, Andrea; Corrado, Mauro

    2014-01-01

    Cracking in silicon solar cells is an important factor in the electrical power loss of photovoltaic modules. Simple geometrical criteria identifying the amount of inactive cell area, depending on the position of cracks with respect to the main electric conductors, have been proposed in the literature to predict worst-case scenarios. Here we present an experimental study based on the electroluminescence (EL) technique showing that crack propagation in monocrystalline silicon cells embedded in photovoltaic (PV) modules is a much more complex phenomenon. In spite of the very brittle nature of silicon, due to the action of the encapsulating polymer and residual thermo-elastic stresses, cracked regions can recover their electric conductivity during mechanical unloading due to crack closure. During cyclic bending, fatigue degradation is reported. This pinpoints the importance of reducing the cyclic stresses caused by vibrations during transportation and use, in order to limit the effect of cracking in silicon cells. PMID:24675974

  9. Cultivating engineering ethics and critical thinking: a systematic and cross-cultural education approach using problem-based learning

    NASA Astrophysics Data System (ADS)

    Chang, Pei-Fen; Wang, Dau-Chung

    2011-08-01

    In May 2008, the worst earthquake in more than three decades struck southwest China, killing more than 80,000 people. The complexity of this earthquake makes it an ideal case study to clarify the intertwined issues of ethics in engineering and to help cultivate critical thinking skills. This paper first explores the need to encourage engineering ethics within a cross-cultural context. Next, it presents a systematic model for designing an engineering ethics curriculum based on moral development theory and ethic dilemma analysis. Quantitative and qualitative data from students' oral and written work were collected and analysed to determine directions for improvement. The paper also presents results of an assessment of this interdisciplinary engineering ethics course. This investigation of a disaster is limited strictly to engineering ethics education; it is not intended to assign blame, but rather to spark debate about ethical issues.

  10. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halligan, Matthew

    Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation, owing to the statistical complexity of finding a radiated power probability density function.
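
    A small sketch of the two-state Markov signal model mentioned above, with illustrative transition probabilities (not values from the report); the stationary bit-state probabilities follow from the leading left eigenvector of the transition matrix:

        import numpy as np

        # Two-state Markov model of a non-periodic data signal:
        # P[i][j] = Pr(next bit = j | current bit = i).
        P = np.array([[0.9, 0.1],
                      [0.3, 0.7]])   # illustrative values

        # Stationary bit-state probabilities solve pi = pi @ P, sum(pi) = 1.
        eigvals, eigvecs = np.linalg.eig(P.T)
        pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
        pi = pi / pi.sum()
        print("Pr(bit=0), Pr(bit=1):", pi)   # -> 0.75, 0.25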

  11. In vitro experimental investigation of voice production

    PubMed Central

    Horáček, Jaromír; Brücker, Christoph; Becker, Stefan

    2012-01-01

    The process of human phonation involves a complex interaction between the physical domains of structural dynamics, fluid flow, and acoustic sound production and radiation. Given the high degree of nonlinearity of these processes, even small anatomical or physiological disturbances can significantly affect the voice signal. In the worst cases, patients can lose their voice and hence the normal mode of speech communication. To improve medical therapies and surgical techniques it is very important to understand better the physics of the human phonation process. Due to the limited experimental access to the human larynx, alternative strategies, including artificial vocal folds, have been developed. The following review gives an overview of experimental investigations of artificial vocal folds within the last 30 years. The models are sorted into three groups: static models, externally driven models, and self-oscillating models. The focus is on the different models of the human vocal folds and on the ways in which they have been applied. PMID:23181007

  12. The effect of solar array degradation on electric propulsion spacecraft performance.

    NASA Technical Reports Server (NTRS)

    Sauer, C. G., Jr.; Bourke, R. D.

    1972-01-01

    Current estimates of solar-electric-propulsion spacecraft performance are based upon a solar-array output power which is degraded by approximately 10-13% to account for possible losses caused by proton, electron and micrometeorite damage. Past studies have used a worst-case analysis in which the maximum degradation was taken to occur at the beginning of the mission. This paper presents a comparison of mission studies using a hypothetical exponential decrease in power with time with those using a sudden degradation of solar power. These comparisons indicate that the performance gain from using a time-varying degradation during the mission is quite small for outbound missions in the solar system. In addition, an indication of the power-allocation strategy to be followed during a mission is presented.

  13. Offshore oil production not significant polluter, says government report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danenberger, E.P.

    1977-11-01

    Only 0.0028% of the oil produced in the Gulf of Mexico from 1971 through 1975 was spilled. World-wide, natural seeps introduce nearly 7 times more oil into the sea than offshore activity, while transportation, the worst offender, puts in 25 times more than offshore oil. The report includes data for spills of 50 bbl or less; about 85.5% of the total spill volume was from 5 of the 5857 incidents. In only one case was environmental damage reported, when minor amounts of oil reached 1000 ft of beach on the Chandeleur Islands after the 9/9/74 Cobia pipeline break. The report states that 50 ppm discharges cause no adverse effect, and that hydrocarbons in this concentration may even benefit microbial sea life.

  14. Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget

    NASA Astrophysics Data System (ADS)

    Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong

    To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive adjustment method of processing quality for multiple image stream tasks running with widely varying execution times. This adjustment method completes the worst-case executions of the tasks with a given budget of energy, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for the tasks being executable with arbitrary processing quality, and near maximum value for the tasks being executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves larger reward values, by up to 57%, than the previous method.

  15. PWFQ: a priority-based weighted fair queueing algorithm for the downstream transmission of EPON

    NASA Astrophysics Data System (ADS)

    Xu, Sunjuan; Ye, Jiajun; Zou, Junni

    2005-11-01

    In the downstream direction of EPON, all Ethernet frames share one downlink channel from the OLT to the destination ONUs. To guarantee differentiated services, a scheduling algorithm is needed to solve this link-sharing issue. In this paper, we first review the classical WFQ algorithm and point out the shortcomings of its fair queueing principle when applied to EPON. We then propose a novel scheduling algorithm, called Priority-based WFQ (PWFQ), which distributes bandwidth based on priority. The PWFQ algorithm can guarantee the quality of real-time services under both light and heavy load. Simulation results also show that the PWFQ algorithm not only improves the delay performance of real-time services but also meets worst-case delay-bound requirements.
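
    The sketch below shows the core of weighted fair queueing for the all-backlogged case: packets are stamped with per-flow virtual finish times (size divided by weight) and transmitted in increasing stamp order. It is a simplified illustration of the WFQ mechanism the paper builds on, not the PWFQ algorithm itself; the flow names and packet sizes are made up:

        import heapq

        def wfq_schedule(flows):
            """flows: dict flow_id -> (weight, [packet sizes]).
            Returns packets in transmission order by virtual finish time."""
            order, heap = [], []
            last_finish = {f: 0.0 for f in flows}
            for f, (weight, packets) in flows.items():
                for i, size in enumerate(packets):
                    finish = last_finish[f] + size / weight
                    last_finish[f] = finish
                    heapq.heappush(heap, (finish, f, i))
            while heap:
                _, f, i = heapq.heappop(heap)
                order.append((f, i))
            return order

        # A higher weight (priority) drains the real-time flow first.
        print(wfq_schedule({"realtime": (4.0, [500, 500]),
                            "besteffort": (1.0, [500, 500])}))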

  16. Geochemical modelling of worst-case leakage scenarios at potential CO2-storage sites - CO2 and saline water contamination of drinking water aquifers

    NASA Astrophysics Data System (ADS)

    Szabó, Zsuzsanna; Edit Gál, Nóra; Kun, Éva; Szőcs, Teodóra; Falus, György

    2017-04-01

    Carbon Capture and Storage is a transitional technology to reduce greenhouse gas emissions and to mitigate climate change. Following the implementation and enforcement of the 2009/31/EC Directive in Hungarian legislation, the Geological and Geophysical Institute of Hungary is required to evaluate the potential CO2 geological storage structures of the country. A basic assessment of these saline-water formations has already been performed, and the present goal is to extend the studies to the whole storage complex and to consider the protection of the fresh-water aquifers of the neighbouring area, even in unlikely scenarios in which CO2 injection has a much more regional effect than planned. In this work, worst-case scenarios are modelled to understand the effects of CO2 or saline-water leaks into drinking-water aquifers. The dissolution of CO2 may significantly change the pH of fresh water, which induces mineral dissolution and precipitation in the aquifer and, therefore, changes in solution composition and even rock porosity. Mobilization of heavy metals may also be of concern. Brine migration from the CO2 reservoir and replacement of fresh water in the shallower aquifer may happen due to the pressure increase caused by CO2 injection; the saline water changes the solution composition, which may also induce mineral reactions. The above scenarios were modelled at several methodological levels: equilibrium batch, kinetic batch and kinetic reactive transport simulations. All of these were performed with PHREEQC using the PHREEQC.DAT thermodynamic database. The kinetic models use equations and kinetic rate parameters from the USGS report of Palandri and Kharaka (2004). Reactive transport modelling also considers the estimated fluid flow and dispersivity of the studied formation. Further input parameters are the rock and original groundwater compositions of the aquifers and a range of gas-phase CO2 or brine replacement ratios. Worst-case scenarios at seven potential CO2-storage areas have been modelled, and the visualization of the results has been automated with R. The three types of models (equilibrium, kinetic batch and reactive transport) provide different but overlapping information. All modelling output for both scenarios (CO2/brine) indicates an increase of ion concentrations in the fresh water, which might exceed drinking-water limit values. Transport models make it possible to identify the most suitable chemical parameter in the fresh water for leakage monitoring; this indicator parameter may show detectable and early changes even far away from the contamination source. In the CO2 models, the increase in potassium concentration is significant and runs ahead of the other parameters. In the rock, the models indicate feldspar, montmorillonite, dolomite and illite dissolution, whereas calcite, chlorite, kaolinite and silica precipitate; in the CO2-inflow models, dawsonite traps a part of the leaking gas.
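
    As a back-of-envelope illustration of why a CO2 leak shifts aquifer pH (the first step in the reaction chains modelled above), the snippet below equilibrates pure water with a CO2 gas phase using textbook Henry's-law and first-dissociation constants; the real simulations use PHREEQC with the PHREEQC.DAT database, so the constants and the simplified charge balance here are illustrative only:

        import math

        # 25 degC constants for CO2(g) <-> CO2(aq) and the first dissociation
        # of carbonic acid (illustrative textbook values).
        KH = 10**-1.47   # mol/(L*atm)
        K1 = 10**-6.35   # [H+][HCO3-]/[CO2(aq)]

        def ph_under_co2(p_co2_atm):
            """pH of initially pure water equilibrated with CO2 gas, assuming
            [H+] ~ [HCO3-] (charge balance, carbonate species neglected)."""
            h = math.sqrt(K1 * KH * p_co2_atm)
            return -math.log10(h)

        print(ph_under_co2(3.9e-4))  # atmospheric CO2 -> pH ~ 5.6
        print(ph_under_co2(1.0))     # CO2-saturated leak -> pH ~ 3.9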

  17. Model Checking - My 27-Year Quest to Overcome the State Explosion Problem

    NASA Technical Reports Server (NTRS)

    Clarke, Ed

    2009-01-01

    Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s, in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine whether the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state-explosion problem.
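
    A minimal sketch of the exhaustive search at the heart of explicit-state model checking, assuming a finite transition system and an invariant property: breadth-first reachability that returns a counterexample trace when the invariant fails, mirroring the counterexample feature described above. Real model checkers add temporal-logic properties and the state-compression techniques this toy omits:

        from collections import deque

        def check_invariant(initial, successors, invariant):
            """Explicit-state reachability check: returns None if the invariant
            holds in every reachable state, else a counterexample trace."""
            parent = {initial: None}
            queue = deque([initial])
            while queue:
                state = queue.popleft()
                if not invariant(state):
                    trace = []
                    while state is not None:
                        trace.append(state)
                        state = parent[state]
                    return list(reversed(trace))
                for nxt in successors(state):
                    if nxt not in parent:
                        parent[nxt] = state
                        queue.append(nxt)
            return None

        # Toy system: a counter mod 8; invariant "counter != 5" fails,
        # producing the trace [0, 1, 2, 3, 4, 5].
        print(check_invariant(0, lambda s: [(s + 1) % 8], lambda s: s != 5))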

  18. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard

    2000-01-01

    This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter starts a series of notes concentrating on analysis techniques, with this issue's section discussing worst-case analysis requirements.

  19. Amelioration of cognitive impairment and changes in microtubule-associated protein 2 after transient global cerebral ischemia are influenced by complex environment experience.

    PubMed

    Briones, Teresita L; Woods, Julie; Wadowska, Magdalena; Rogozinska, Magdalena

    2006-04-03

    In this study we examined whether expression of microtubule-associated protein 2 (MAP2) after transient global cerebral ischemia can be influenced by behavioral experience and whether the changes are associated with functional improvement. Rats received either ischemia or sham surgery and were then assigned to complex environment housing (EC) or social housing (SC, as controls) for 14 days, followed by water maze testing. Upregulation of MAP2 was seen in all ischemic animals, with a significant overall increase evident in the EC-housed rats. Behaviorally, all animals learned to perform the water maze task over time, but the ischemia SC rats had the worst performance overall, while the EC-housed animals demonstrated the best performance in general. Regression analysis showed that increased MAP2 expression explained some of the variance in the behavioral parameters in the water maze, suggesting that this cytoskeletal protein probably played a role in mediating enhanced functional outcomes.

  20. Approximate matching of structured motifs in DNA sequences.

    PubMed

    El-Mabrouk, Nadia; Raffinot, Mathieu; Duchesne, Jean-Eudes; Lajoie, Mathieu; Luc, Nicolas

    2005-04-01

    Several methods have been developed for identifying more or less complex RNA structures in a genome. All these methods are based on the search for conserved primary and secondary sub-structures. In this paper, we present a simple formal representation of a helix, which is a combination of sequence and folding constraints, as a constrained regular expression. This representation allows us to develop a well-founded algorithm that searches for all approximate matches of a helix in a genome. The algorithm is based on an alignment graph constructed from several copies of a pushdown automaton, arranged one on top of another. This is a first attempt to take advantage of the possibilities of pushdown automata in the context of approximate matching. The worst-case time complexity is O(krpn), where k is the error threshold, n the size of the genome, p the size of the secondary expression, and r its number of union symbols. We then extend the algorithm to search for pseudo-knots and secondary structures containing an arbitrary number of helices.
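
    For intuition, the sketch below solves the plain-sequence version of the problem with the classical Sellers dynamic program, reporting end positions where a pattern matches with at most k edits; the paper's pushdown-automaton alignment graph generalizes this idea to structured motifs with folding constraints, which this simplified code does not attempt:

        def approx_matches(pattern, text, k):
            """Sellers dynamic programming: end positions in text where the
            pattern matches with at most k edits (insert/delete/substitute)."""
            m = len(pattern)
            col = list(range(m + 1))          # distances against empty text
            hits = []
            for j, ch in enumerate(text, 1):
                prev, col[0] = col[0], 0      # a match may start anywhere
                for i in range(1, m + 1):
                    cur = min(col[i] + 1,                     # deletion
                              col[i - 1] + 1,                 # insertion
                              prev + (pattern[i - 1] != ch))  # substitution
                    prev, col[i] = col[i], cur
                if col[m] <= k:
                    hits.append(j)
            return hits

        print(approx_matches("ACGT", "TTACGATACGTA", k=1))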

  1. Analysis of Separation Corridors for Visiting Vehicles from the International Space Station

    NASA Technical Reports Server (NTRS)

    Zaczek, Mariusz P.; Schrock, Rita R.; Schrock, Mark B.; Lowman, Bryan C.

    2011-01-01

    The International Space Station (ISS) is a very dynamic vehicle with many operational constraints that affect its performance, operations, and vehicle lifetime. Most constraints are designed to alleviate various safety concerns that are a result of dynamic activities between the ISS and various Visiting Vehicles (VVs). One such constraint that has been in place for Russian Vehicle (RV) operations is the limitation placed on Solar Array (SA) positioning in order to prevent collisions during separation and subsequent relative motion of VVs. An unintended consequence of the SA constraint has been the impacts to the operational flexibility of the ISS resulting from the reduced power generation capability as well as from a reduction in the operational lifetime of various SA components. The purpose of this paper is to discuss the technique and the analysis that were applied in order to relax the SA constraints for RV undockings, thereby improving both the ISS operational flexibility and extending its lifetime for many years to come. This analysis focused on the effects of the dynamic motion that occur both prior to and following RV separations. The analysis involved a parametric approach in the conservative application of various initial conditions and assumptions. These included the use of the worst case minimum and maximum vehicle configurations, worst case initial attitudes and attitude rates, and the worst case docking port separation dynamics. Separations were calculated for multiple ISS docking ports, at varied deviations from the nominal undocking attitudes and included the use of two separate attitude control schemes: continuous free-drift and a post separation attitude hold. The analysis required numerical propagation of both the separation motion and the vehicle attitudes using 3-degree-of-freedom (DOF) relative motion equations coupled with rigid body rotational dynamics to generate a large set of separation trajectories.
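
    As an illustration of the relative-motion propagation underlying such separation analyses, the sketch below evaluates the closed-form Clohessy-Wiltshire equations for a small separation impulse; the paper's analysis couples 3-DOF relative motion with rigid-body attitude dynamics and worst-case initial conditions, which this toy calculation omits (the orbit period and burn size below are illustrative):

        import math

        def cw_relative_position(x0, y0, z0, vx0, vy0, vz0, n, t):
            """Clohessy-Wiltshire closed-form relative motion about a circular
            orbit (x radial, y along-track, z cross-track), mean motion n."""
            s, c = math.sin(n * t), math.cos(n * t)
            x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
            y = (6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0
                 + ((4 * s - 3 * n * t) / n) * vy0)
            z = c * z0 + (s / n) * vz0
            return x, y, z

        n_iss = 2 * math.pi / 5580.0   # ~93-minute orbit
        # 0.1 m/s radial separation burn; relative position after 30 minutes.
        print(cw_relative_position(0, 0, 0, -0.1, 0, 0, n_iss, 1800))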

  2. A structured framework for assessing sensitivity to missing data assumptions in longitudinal clinical trials.

    PubMed

    Mallinckrodt, C H; Lin, Q; Molenberghs, M

    2013-01-01

    The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result with those from a series of analyses that employed varying missing-not-at-random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and with MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that, after dropout, the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = 0.013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing-not-at-random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result; therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Reliability evaluation of high-performance, low-power FinFET standard cells based on mixed RBB/FBB technique

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole

    2017-04-01

    With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support a VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance, low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating-environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard-deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the normalized deviation shrinkage in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and more reliable. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
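
    The sketch below illustrates the Monte Carlo procedure described above - drawing Gaussian device-parameter variations and collecting worst-case delay statistics over 10,000 sweeps - with a toy delay model standing in for circuit simulation; the nominal values, spreads, and alpha-power-law model are illustrative, not the paper's library data:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 10_000

        # Gaussian process/environment variations around nominal device values.
        vth = rng.normal(0.30, 0.01, N)       # threshold voltage, V
        lg = rng.normal(14e-9, 0.4e-9, N)     # gate length, m

        # Toy alpha-power-law delay model standing in for SPICE simulation.
        delay = 1e-12 * (lg / 14e-9) / (0.8 - vth) ** 1.3

        print("mean = %.3e s, std = %.3e s" % (delay.mean(), delay.std()))
        print("worst-case (99.87th pct, ~+3 sigma): %.3e s"
              % np.percentile(delay, 99.87))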

  4. Indoor exposure to toluene from printed matter matters: complementary views from life cycle assessment and risk assessment.

    PubMed

    Walser, Tobias; Juraske, Ronnie; Demou, Evangelia; Hellweg, Stefanie

    2014-01-01

    A pronounced presence of toluene from rotogravure printed matter has been frequently observed indoors. However, its consequences for human health over the life cycle of magazines are poorly known. Therefore, we quantified human-health risks in indoor environments with Risk Assessment (RA), and impacts relative to the total impact of toxic releases occurring in the life cycle of a magazine with Life Cycle Assessment (LCA). We used a one-box indoor model to estimate toluene concentrations in printing facilities, newsstands, and residences in best-, average-, and worst-case scenarios. The modeled concentrations are in the range of the values measured in on-site campaigns. Toluene concentrations can approach or even surpass the occupational legal thresholds in printing facilities in realistic worst-case scenarios. The concentrations in homes can surpass the US EPA reference dose (69 μg/kg/day) in worst-case scenarios, but are still at least one order of magnitude lower than in press rooms or newsstands. However, toluene inhaled at home becomes the dominant contribution to the total potential human-toxicity impacts of toluene from printed matter when assessed with LCA, using the USEtox method complemented with indoor characterization factors for toluene. The significant contribution (44%) of toluene exposure in production, retail, and use in households to the total life cycle impact of a magazine in the category of human toxicity demonstrates that the indoor compartment requires particular attention in LCA. While RA works with threshold levels, LCA assumes that every toxic emission causes an incremental change to the total impact. Here, the combination of the two paradigms provides valuable information on the life cycle stages of printed matter.
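
    A minimal sketch of the one-box (well-mixed single zone) indoor model used above: at steady state the concentration is the emission rate divided by the ventilation rate (room volume times air changes per hour). The emission rate, room volume, and air-change rate below are hypothetical placeholders, not the study's scenario values:

        def one_box_concentration(emission_mg_h, volume_m3, ach):
            """Steady-state indoor concentration (mg/m^3) for a well-mixed
            single zone: emission rate / (volume * air changes per hour)."""
            return emission_mg_h / (volume_m3 * ach)

        # Hypothetical numbers: a residence storing fresh rotogravure magazines.
        c = one_box_concentration(emission_mg_h=20.0, volume_m3=250.0, ach=0.5)
        print(f"{c * 1000:.0f} ug/m3 toluene")   # -> 160 ug/m3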

  5. Level II scour analysis for Bridge 21 (MIDBTH00230021) on Town Highway 23, crossing the Middlebury River, Middlebury, Vermont

    USGS Publications Warehouse

    Boehmler, Erick M.; Degnan, James R.

    1997-01-01

    year discharges. In addition, the incipient roadway-overtopping discharge is determined and analyzed as another potential worst-case scour scenario. Total scour at a highway crossing is comprised of three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge) and; 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour and a summary of the results of these computations follows. Contraction scour for all modelled flows ranged from 1.2 to 1.8 feet. The worst-case contraction scour occurred at the incipient overtopping discharge, which is less than the 500-year discharge. Abutment scour ranged from 17.7 to 23.7 feet. The worst-case abutment scour occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.
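
    The component sum described above is simple arithmetic, sketched below with the worst-case values reported in this record (long-term degradation is taken as zero here purely for illustration):

        def total_scour(degradation_ft, contraction_ft, local_ft):
            """Total scour at a crossing is the sum of the three components."""
            return degradation_ft + contraction_ft + local_ft

        # Worst-case values from this report: contraction 1.8 ft, abutment 23.7 ft.
        print(total_scour(0.0, 1.8, 23.7))   # -> 25.5 ft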

  6. Performance enhancement of various real-time image processing techniques via speculative execution

    NASA Astrophysics Data System (ADS)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to reduce the processing time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes; however, execution of all parallel threads is mandatory for the correctness of the algorithm. Speculative execution, on the other hand, is an optimistic execution of part(s) of the program based on assumptions about program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average-case, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate the applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.

  7. The interference of electronic implants in low frequency electromagnetic fields.

    PubMed

    Silny, J

    2003-04-01

    Electronic implants such as cardiac pacemakers or nerve stimulators can be impaired in different ways by amplitude-modulated and even continuous electric or magnetic fields of high field strength. For the implant bearer, the possible consequences of a temporary electromagnetic interference range from a harmless impairment of well-being to a perilous predicament. Electromagnetic interference in all types of implants cannot be covered here, owing to their various locations in the body and their different sensing systems; this presentation therefore focuses, by way of example, on the most frequently used implant, the cardiac pacemaker. In the case of electromagnetic interference, a cardiac pacemaker reacts by switching to inhibition mode or to fast asynchronous pacing. At a higher disturbance voltage on the input of the pacemaker, regular asynchronous pacing is likely to arise. The first-named interference in particular could be highly dangerous for the pacemaker patient. The interference threshold of cardiac pacemakers depends in a complex way on a number of different factors, such as: the electromagnetic immunity and adjustment of the pacemaker; the composition of the applied low-frequency fields (electric or magnetic fields alone, or combinations of both), their frequencies and modulations; the type of pacemaker system (bipolar, unipolar) and its location in the body; the body size and orientation in the field; and, last but not least, certain physiological conditions of the patient (e.g. inhalation, exhalation). In extensive laboratory studies we have investigated the interference mechanisms in more than 100 cardiac pacemakers (older types as well as current models) and the resulting worst-case conditions for pacemaker patients in low-frequency electric and magnetic fields. The verification of these results in practical everyday situations, e.g. in the fields of high-voltage overhead lines or of electronic article surveillance systems, is currently in progress. For vertically oriented 50 Hz electric fields, preliminary results show that, per 1 kV/m of unperturbed electric field strength (rms), a worst-case interference voltage of about 400 microVpp can occur at the input of a unipolar, ventricularly controlled, left-pectorally implanted cardiac pacemaker. Thus, a field strength above about 5 kV/m could already cause interference with an implanted pacemaker. Magnetic fields induce an electric disturbance voltage at the input of the pacemaker. The body and the pacemaker system form several induction loops, whose induced voltages add or subtract; the effective area of one representative inductive loop ranges from 100 to 221 cm2. For the unfavourable case of a left-pectorally implanted, atrially controlled pacemaker with a low interference threshold, the threshold ranges from 552 down to 16 microT (rms) for magnetic fields at frequencies between 10 and 250 Hz. On this basis, interference with implanted pacemakers is possible in everyday situations. Experiments, however, demonstrate a low probability of interference with cardiac pacemakers in practical situations. This apparent contradiction can be explained by a very narrow inhibition band in most pacemakers and by conditions that, in comparison with the worst case, deviate in practice.
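
    The magnetic coupling described above can be checked with the elementary induction-loop formula V_rms = 2*pi*f*B_rms*A; the sketch below applies it to the largest effective loop area quoted in the abstract (221 cm2) with an illustrative field level, giving the order of magnitude of the disturbance voltage at the pacemaker input:

        import math

        def induced_emf_rms(b_rms_tesla, freq_hz, loop_area_m2):
            """RMS voltage induced in a conducting loop by a uniform sinusoidal
            magnetic field normal to the loop: V = 2*pi*f*B*A."""
            return 2 * math.pi * freq_hz * b_rms_tesla * loop_area_m2

        # Upper-bound loop area from the abstract (221 cm^2) at 50 Hz, 100 uT:
        v = induced_emf_rms(100e-6, 50.0, 221e-4)
        print(f"{v * 1e6:.0f} uV rms")   # ~694 uV at the pacemaker input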

  8. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The distribution of goods requires a strategy that minimizes the total cost of operational activities, while several constraints have to be satisfied, namely the capacity of the vehicles and the service times of the customers. This makes the Vehicle Routing Problem with Time Windows (VRPTW) a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm to obtain simpler and faster convergence. The computational results show that these algorithms perform well in minimizing the total distance, and that a larger population yields better performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing when dealing with big data.
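
    Whatever metaheuristic generates the routes, every candidate must be screened against the two constraint families named above. A minimal feasibility check, assuming travel times between consecutive stops are precomputed and ignoring the return leg to the depot, is sketched below (all numbers are made up):

        def route_feasible(route, depot_due, capacity):
            """route: list of (demand, travel_time_from_prev, ready, due, service).
            Checks vehicle capacity and customer time windows along one route."""
            load, time = 0, 0.0
            for demand, travel, ready, due, service in route:
                load += demand
                time += travel
                time = max(time, ready)          # wait if arriving early
                if load > capacity or time > due:
                    return False
                time += service
            return time <= depot_due             # finish before the depot closes

        route = [(4, 10, 0, 50, 5), (3, 15, 30, 60, 5)]
        print(route_feasible(route, depot_due=100, capacity=10))   # -> True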

  9. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The Firefly Algorithm (FA) mimics the firefly attraction process to solve optimization problems. In FA, fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The data set used is the unconstrained benchmark functions from CEC 2005 [22]. FA with bubble sort and FA with quick sort are compared with respect to best, worst, and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performed better at lower dimensions than at higher ones.
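
    The sketch below mirrors the paper's experiment in miniature: ranking a population of firefly brightness values with bubble sort versus quicksort while counting comparisons (one logical comparison per element at each partition step); the population size is arbitrary and the simple recursive quicksort is a stand-in for whatever implementation the authors used:

        import random

        def bubble_sort(a):
            """Rank fireflies by brightness; returns (sorted, comparisons)."""
            a, comps = list(a), 0
            for i in range(len(a) - 1):
                for j in range(len(a) - 1 - i):
                    comps += 1
                    if a[j] > a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]
            return a, comps

        def quick_sort(a):
            comps = 0
            def qs(a):
                nonlocal comps
                if len(a) <= 1:
                    return a
                pivot, rest = a[0], a[1:]
                comps += len(rest)   # one logical comparison per element
                return (qs([x for x in rest if x < pivot]) + [pivot]
                        + qs([x for x in rest if x >= pivot]))
            return qs(list(a)), comps

        brightness = [random.random() for _ in range(40)]
        print("bubble comparisons:", bubble_sort(brightness)[1])   # 780
        print("quick comparisons: ", quick_sort(brightness)[1])    # ~240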

  10. An adaptive large neighborhood search procedure applied to the dynamic patient admission scheduling problem.

    PubMed

    Lusby, Richard Martin; Schwierz, Martin; Range, Troels Martin; Larsen, Jesper

    2016-11-01

    The aim of this paper is to provide an improved method for solving the so-called dynamic patient admission scheduling (DPAS) problem. This is a complex scheduling problem that involves assigning a set of patients to hospital beds over a given time horizon in such a way that several quality measures reflecting patient comfort and treatment efficiency are maximized. Consideration must be given to uncertainty in the lengths of stay of patients as well as the possibility of emergency patients. We develop an adaptive large neighborhood search (ALNS) procedure to solve the problem; this procedure utilizes a Simulated Annealing framework. We thoroughly test the performance of the proposed ALNS approach on a set of 450 publicly available problem instances. A comparison with the current state-of-the-art indicates that the proposed methodology provides solutions of comparable quality for small and medium sized instances (up to 1000 patients); the two approaches provide solutions that differ in quality by approximately 1% on average. The ALNS procedure does, however, provide solutions in a much shorter time frame. On larger instances (between 1000 and 4000 patients), the improvement in solution quality by the ALNS procedure is substantial, approximately 3-14% on average, and as much as 22% on a single instance. The time taken to find such results is, however, in the worst case a factor of 12 longer on average than the time limit granted to the current state-of-the-art. The proposed ALNS procedure is an efficient and flexible method for solving the DPAS problem. Copyright © 2016 Elsevier B.V. All rights reserved.
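
    A minimal skeleton of an ALNS loop of the kind described above, assuming user-supplied destroy and repair operators and a cost function: operators are chosen with adaptive weights that are rewarded when they improve the incumbent, and candidates are accepted with a Simulated Annealing criterion. The weight-update and cooling constants are illustrative, not the paper's tuned values; in the DPAS setting, a destroy operator might unassign a group of patients and a repair operator reinsert them into beds:

        import math, random

        def alns(initial, destroyers, repairers, cost, iters=10_000, t0=1.0):
            """Minimal adaptive large neighborhood search with a Simulated
            Annealing acceptance criterion and weight-based operator choice."""
            best = cur = initial
            w_d = [1.0] * len(destroyers)
            w_r = [1.0] * len(repairers)
            temp = t0
            for _ in range(iters):
                i = random.choices(range(len(destroyers)), w_d)[0]
                j = random.choices(range(len(repairers)), w_r)[0]
                cand = repairers[j](destroyers[i](cur))
                delta = cost(cand) - cost(cur)
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    cur = cand
                    if cost(cur) < cost(best):
                        best = cur
                        w_d[i] += 0.5; w_r[j] += 0.5   # reward good operators
                temp *= 0.999                          # cooling schedule
            return best

        # Toy usage: search the integers for x = 42 with perturb/clamp operators.
        destroy = [lambda x: x + random.randint(-10, 10)]
        repair = [lambda x: max(0, min(100, x))]
        print(alns(0, destroy, repair, cost=lambda x: abs(x - 42)))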

  11. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while the generation from non-renewable energy sources is minimized. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent-system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization results demonstrate the effectiveness of the robust controllers under bounded power-plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. The proposed framework therefore offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.

  12. An inference engine for embedded diagnostic systems

    NASA Technical Reports Server (NTRS)

    Fox, Barry R.; Brewster, Larry T.

    1987-01-01

    The implementation of an inference engine for embedded diagnostic systems is described. The system consists of two distinct parts. The first is an off-line compiler which accepts a propositional-logic statement of the relationship between facts and conclusions and produces the data structures required by the on-line inference engine. The second part consists of the inference engine and interface routines, which accept assertions of fact and return the conclusions that necessarily follow. Given a set of assertions, it will generate exactly the conclusions which logically follow; at the same time, it will detect any inconsistencies which may propagate from an inconsistent set of assertions or a poorly formulated set of rules. The memory requirements are fixed and the worst-case execution times are bounded at compile time. The data structures and inference algorithms are very simple and well understood, and both are described in detail. The system has been implemented in Lisp, Pascal, and Modula-2.
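
    A minimal sketch of the kind of propositional inference such an engine performs, assuming Horn-form rules (a set of premise facts implying one conclusion): forward chaining fires each rule at most once, so both memory and worst-case work are fixed once the rule base is compiled. The fact and rule names are invented for illustration:

        def forward_chain(facts, rules):
            """Propositional Horn-clause inference; rules are (premises, conclusion).
            Each rule fires at most once, so total work is bounded up front."""
            known = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if conclusion not in known and premises <= known:
                        known.add(conclusion)
                        changed = True
            return known

        rules = [({"sensor_a_high", "sensor_b_high"}, "pump_fault"),
                 ({"pump_fault"}, "replace_pump")]
        print(forward_chain({"sensor_a_high", "sensor_b_high"}, rules))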

  13. Poison ivy - oak - sumac rash

    MedlinePlus

    ... reaction can vary from mild to severe. In rare cases, the person with the rash needs to be treated in the hospital. The worst symptoms are often seen during days 4 to 7 after coming in contact with the plant. The rash may last for 1 to 3 ...

  14. Closed Environment Module - Modularization and extension of the Virtual Habitat

    NASA Astrophysics Data System (ADS)

    Plötner, Peter; Czupalla, Markus; Zhukov, Anton

    2013-12-01

    The Virtual Habitat (V-HAB) is a Life Support System (LSS) simulation created to perform dynamic simulations of LSSs for future human spaceflight missions. It allows the testing of LSS robustness by means of computer simulations, e.g. of worst-case scenarios.

  15. 49 CFR 238.431 - Brake system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow... a brake rate consistent with prevailing adhesion, passenger safety, and brake system thermal... adhesion control system designed to automatically adjust the braking force on each wheel to prevent sliding...

  16. 40 CFR 300.135 - Response operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY... discharge is a worst case discharge as discussed in § 300.324; the pathways to human and environmental exposure; the potential impact on human health, welfare, and safety and the environment; whether the...

  17. Management of reliability and maintainability; a disciplined approach to fleet readiness

    NASA Technical Reports Server (NTRS)

    Willoughby, W. J., Jr.

    1981-01-01

    Material acquisition fundamentals were reviewed and include: mission profile definition, stress analysis, derating criteria, circuit reliability, failure modes, and worst case analysis. Military system reliability was examined with emphasis on the sparing of equipment. The Navy's organizational strategy for 1980 is presented.

  18. Empirical Modeling Of Single-Event Upset

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.

    1988-01-01

    An experimental study presents examples of empirical modeling of single-event upset (SEU) in n-doped-source/drain metal-oxide-semiconductor static random-access memory cells. The data support the adoption of a simplified worst-case model in which the cross section for SEU by an ion above the threshold energy equals the area of the memory cell.
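
    The simplified worst-case model amounts to a step function, sketched below with hypothetical numbers: below the upset threshold the SEU cross section is zero, and above it the cross section saturates at the memory-cell area:

        def seu_cross_section(ion_let, let_threshold, cell_area_cm2):
            """Worst-case step model: above threshold, the SEU cross section
            saturates at the memory-cell area; below it, no upsets occur."""
            return cell_area_cm2 if ion_let >= let_threshold else 0.0

        # Hypothetical numbers: 10 um x 10 um cell, threshold LET 3 MeV*cm^2/mg.
        print(seu_cross_section(ion_let=15.0, let_threshold=3.0,
                                cell_area_cm2=1e-6))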

  19. A General Safety Assessment for Purified Food Ingredients Derived From Biotechnology Crops: Case Study of Brazilian Sugar and Beverages Produced From Insect-Protected Sugarcane.

    PubMed

    Kennedy, Reese D; Cheavegatti-Gianotto, Adriana; de Oliveira, Wladecir S; Lirette, Ronald P; Hjelle, Jerry J

    2018-01-01

    Insect-protected sugarcane that expresses Cry1Ab has been developed in Brazil. Analysis of trade information has shown that effectively all the sugarcane-derived Brazilian exports are raw or refined sugar and ethanol. The fact that raw and refined sugar are highly purified food ingredients, with no detectable transgenic protein, provides an interesting case study for a generalized safety assessment approach. In this study, both the theoretical protein intakes and the safety assessments of the Cry1Ab, Cry1Ac, NPTII, and Bar proteins used in insect-protected biotechnology crops were examined. The potential consumption of these proteins was examined using local market research data on average added-sugar intakes in eight diverse and representative Brazilian raw and refined sugar export markets (Brazil, Canada, China, Indonesia, India, Japan, Russia, and the USA). The average sugar intakes, which ranged from 5.1 g of added sugar/person/day (India) to 126 g/person/day (USA), were used to calculate possible human exposure. The theoretical protein intake estimates were carried out under two scenarios: the "Worst-case" scenario assumed that 1 μg of newly-expressed protein is detected per g of raw or refined sugar, and the "Reasonable-case" scenario assumed 1 ng of protein per g of sugar. The "Worst-case" scenario was based on the results of detailed studies of sugarcane processing in Brazil, which showed that refined sugar contains less than 1 μg of total plant protein per g of refined sugar. The "Reasonable-case" scenario was based on the assumption that the expression levels of the newly-expressed proteins in stalk were less than 0.1% of total stalk protein. Using these calculated protein intake values from the consumption of sugar, along with the accepted NOAEL levels of the four representative proteins, we concluded that safety margins for the "Worst-case" scenario ranged from 6.9 × 10^5 to 5.9 × 10^7 and for the "Reasonable-case" scenario from 6.9 × 10^8 to 5.9 × 10^10. These safety margins are very high, owing to the extremely low possible exposures and the high NOAELs for these non-toxic proteins. This generalized approach to the safety assessment of highly purified food ingredients like sugar illustrates that sugar processed from Brazilian GM varieties is safe for consumption in representative markets globally.
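
    The arithmetic behind the quoted safety margins is straightforward, as sketched below: daily protein intake is sugar intake times the assumed protein concentration, scaled by body weight, and the margin is the NOAEL divided by that intake. The 60 kg body weight and the 1000 mg/kg/day NOAEL are hypothetical placeholders (the paper uses the accepted NOAELs of the four proteins):

        def safety_margin(noael_mg_kg_day, sugar_g_day, protein_ug_per_g,
                          body_weight_kg=60.0):
            """Safety margin = NOAEL / estimated daily protein intake."""
            intake_mg_kg_day = (sugar_g_day * protein_ug_per_g * 1e-3
                                / body_weight_kg)
            return noael_mg_kg_day / intake_mg_kg_day

        # "Worst-case" scenario: 1 ug protein per g sugar in the highest-intake
        # market (USA, 126 g/day); NOAEL of 1000 mg/kg/day is a placeholder.
        print(f"{safety_margin(1000.0, 126.0, 1.0):.1e}")   # ~4.8e5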

  20. Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction

    NASA Technical Reports Server (NTRS)

    Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.

    2006-01-01

    The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with an appropriate specification, in NPG8020.12C as a low-temperature technique complementary to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study was an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D value may be imposed, a process humidity range for which the worst-case D value may be imposed, and robustness to selected spacecraft material substrates.
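
    D values summarize log-linear spore inactivation: one D value is the exposure time that cuts the viable population tenfold, so the margin calculation mentioned above reduces to simple arithmetic. The sketch below uses a hypothetical D value, not a measured VHP result:

        def surviving_spores(n0, exposure_min, d_value_min):
            """Log-linear inactivation: each D value of exposure reduces the
            viable spore population by a factor of ten."""
            return n0 * 10 ** (-exposure_min / d_value_min)

        def time_for_log_reduction(logs, d_value_min):
            return logs * d_value_min

        # Hypothetical D value of 30 min: a 6-log reduction needs 180 min.
        print(time_for_log_reduction(6, 30.0))
        print(surviving_spores(1e6, 180.0, 30.0))   # -> 1.0 spore expected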

  1. Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?

    NASA Astrophysics Data System (ADS)

    Ferrie, Chris; Blume-Kohout, Robin

    2011-03-01

    Quantum state estimation (a.k.a. ``tomography'') plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g. for classical coins or dice - but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the ``noisy coin.'' Observations on noisy coins are unreliable - much like responses to solicitations of sensitive information, such as one's tax preparation habits. So, like a quantum system, a noisy coin cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.

  2. Derivation and experimental verification of clock synchronization theory

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1994-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
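
    For intuition, the sketch below shows one resynchronization round of the Interactive Convergence algorithm from a single node's point of view: measured skews larger in magnitude than a fixed threshold (suspected faulty clocks) are replaced by zero before averaging, which bounds the influence of a malicious clock. This is a simplified single-round view with made-up skew values, not the experimental setup described above:

        def interactive_convergence(skews, delta):
            """One round, one node: average the measured skews to all clocks,
            treating any reading whose magnitude exceeds delta as zero."""
            readings = [s if abs(s) <= delta else 0.0 for s in skews]
            return sum(readings) / len(readings)

        # Measured offsets to itself and 4 peers, one of them a malicious outlier.
        print(interactive_convergence([0.0, 2e-6, -3e-6, 4e-6, 5e-3], 1e-4))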

  3. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.

  4. Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1994-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results pertaining to the satellite's environment are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.

  5. Correct consideration of the index of refraction using blackbody radiation.

    PubMed

    Hartmann, Jurgen

    2006-09-04

    The correct consideration of the index of refraction when using blackbody radiators as standard sources of optical radiation is derived and discussed. It is shown that simply using the index of refraction of air at laboratory conditions is not sufficient: a combination of the index of refraction of the medium inside the blackbody radiator and that of the optical path between blackbody and detector has to be used instead. A worst-case approximation of the error introduced when neglecting these effects is presented, showing that the error is below 0.1% for wavelengths above 200 nm. Nevertheless, for the determination of spectral radiance for the purpose of radiation temperature measurements, the correct consideration of the refractive index is mandatory. The worst-case estimation reveals that the introduced temperature error at a blackbody temperature of 3000 degrees C can be as high as 400 mK at a wavelength of 650 nm, and even higher at longer wavelengths.
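
    A back-of-envelope cross-check of the quoted temperature error, assuming the dominant effect is the wavelength shift by the refractive index of air and using the Wien approximation; this rough estimate lands at the same order of magnitude as the 400 mK figure, though the paper's full derivation differs:

        # Wien approximation: c2/(lambda*T) fixed => dT ~ T**2 * lambda * (n-1) / c2
        C2 = 1.4388e-2        # second radiation constant, m*K
        T = 3273.0            # 3000 degC blackbody, in K
        lam = 650e-9          # wavelength, m
        n_minus_1 = 2.8e-4    # refractive index of air minus one, visible range

        dT = T**2 * lam * n_minus_1 / C2
        print(f"{dT * 1000:.0f} mK")   # same order as the quoted 400 mK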

  6. Thermal-hydraulic analysis of N Reactor graphite and shield cooling system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, J.O.; Schmitt, B.E.

    1988-02-01

    A series of bounding (worst-case) calculations was performed using a detailed hydrodynamic RELAP5 model of the N Reactor graphite and shield cooling system (GSCS). These calculations were specifically aimed at answering issues raised by the Westinghouse Independent Safety Review (WISR) committee. These questions address the operability of the GSCS during a worst-case degraded-core accident that requires the GSCS to mitigate the consequences of the accident. An accident scenario previously developed was designated the hydrogen-mitigation design-basis accident (HMDBA). Previous HMDBA heat transfer analysis, using the TRUMP-BD code, was used to define the thermal boundary conditions to which the GSCS may be exposed. These TRUMP/HMDBA analysis results were used to define the bounding operating conditions of the GSCS during the course of an HMDBA transient. Nominal and degraded GSCS scenarios were investigated using RELAP5 within or at the bounds of the HMDBA transient. 10 refs., 42 figs., 10 tabs.

  7. Level II scour analysis for Bridge 37, (BRNETH00740037) on Town Highway 74, crossing South Peacham Brook, Barnet, Vermont

    USGS Publications Warehouse

    Burns, Ronda L.; Severance, Timothy

    1997-01-01

    Contraction scour for all modelled flows ranged from 15.8 to 22.5 ft. The worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.7 to 11.1 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring are included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in Tables 1 and 2. A cross-section of the scour computed at the bridge is presented in Figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. Usually, computed scour depths are evaluated in combination with other information including (but not limited to) historical performance during flood events, the geomorphic stability assessment, existing scour protection measures, and the results of the hydraulic analyses. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein.

  8. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10⁻⁴ on the available data sets.
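
    For readers unfamiliar with the distance metric mentioned above, the sketch below shows a plain fractional Hamming distance between two binary iris codes; the paper's product-of-sum combination over feature bits and patch positions is not reproduced, and the bit patterns are invented.

        def hamming_distance(code_a, code_b):
            """Fraction of disagreeing bits between two equal-length bit lists."""
            assert len(code_a) == len(code_b)
            disagreements = sum(a != b for a, b in zip(code_a, code_b))
            return disagreements / len(code_a)

        # Verification accepts a pair when the distance falls below a threshold.
        print(hamming_distance([1, 0, 1, 1, 0, 0, 1, 0],
                               [1, 0, 1, 0, 0, 0, 1, 0]))  # 0.125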

  9. A CMOS matrix for extracting MOSFET parameters before and after irradiation

    NASA Technical Reports Server (NTRS)

    Blaes, B. R.; Buehler, M. G.; Lin, Y.-S.; Hicks, K. A.

    1988-01-01

    An addressable matrix of 16 n- and 16 p-MOSFETs was designed to extract the dc MOSFET parameters for all dc gate bias conditions before and after irradiation. The matrix contains four sets of MOSFETs, each with four different geometries that can be biased independently. Thus the worst-case bias scenarios can be determined. The MOSFET matrix was fabricated at a silicon foundry using a radiation-soft CMOS p-well LOCOS process. Co-60 irradiation results for the n-MOSFETs showed a threshold-voltage shift of -3 mV/krad(Si), whereas the p-MOSFETs showed a shift of 21 mV/krad(Si). The worst-case threshold-voltage shift occurred for the n-MOSFETs, with a gate bias of 5 V during the anneal. For the p-MOSFETs, biasing did not affect the shift in the threshold voltage. A parasitic MOSFET dominated the leakage of the n-MOSFET biased with 5 V on the gate during irradiation. Co-60 test results for other parameters are also presented.

  10. Modelling the long-term evolution of worst-case Arctic oil spills.

    PubMed

    Blanken, Hauke; Tremblay, Louis Bruno; Gaskin, Susan; Slavin, Alexander

    2017-03-15

We present worst-case assessments of contamination in sea ice and surface waters resulting from hypothetical well blowout oil spills at ten sites in the Arctic Ocean basin. Spill extents are estimated by considering Eulerian passive tracers in the surface ocean of the MITgcm (a hydrostatic, coupled ice-ocean model). Oil in sea ice, and contamination resulting from melting of oiled ice, is tracked using an offline Lagrangian scheme. Spills are initialized on November 1st of each year from 1980 to 2010 and tracked for one year. An average spill was transported 1100 km and potentially affected 1.1 million km². The direction and magnitude of simulated oil trajectories are consistent with known large-scale current and sea ice circulation patterns, and trajectories frequently cross international boundaries. The simulated trajectories of oil in sea ice match observed ice drift trajectories well. During the winter, oil transport by drifting sea ice is more significant than transport with surface currents.

  11. Successful amblyopia therapy initiated after age 7 years: compliance cures.

    PubMed

    Mintz-Hittner, H A; Fernandez, K M

    2000-11-01

    To report successful therapy for anisometropic and strabismic amblyopia initiated after age 7 years. A consecutive series of 36 compliant children older than 7 years (range, 7.0 to 10.3 years; mean, 8.2 years) at initiation of amblyopia therapy for anisometropic (19 patients; mean age, 8.3 years), strabismic (9 patients; mean age, 8.0 years), or anisometropic and strabismic (8 patients; mean age, 8.0 years) amblyopia was studied. Initial (worst) visual acuities were between 20/50 and 20/400 (log geometric mean, -0.83 [antilog, 20/134] for all patients; -0.88 [antilog, 20/151] for anisometropic patients; -0.70 [antilog, 20/100] for strabismic patients; and -0.88 [antilog, 20/151] for anisometropic and strabismic patients). Initial (worst) binocularity was absent or reduced in all cases. Therapy consisted of (1) full-time standard occlusion (21 patients; mean age, 8.0 years), (2) total penalization (7 patients; mean age, 7.8 years), or (3) full-time occlusive contact lenses (8 patients; mean age, 8.8 years). Final (best) visual acuities were between 20/20 and 20/30 for all 36 patients. Final (best) binocularity was maintained or improved for 22 (61%) of 36 patients, including 16 anisometropic patients (84%), 2 strabismic patients (22%), and 4 anisometropic and strabismic patients (50%). Given compliance, therapy for anisometropic and strabismic amblyopia can be successful even if initiated after age 7 years. Arch Ophthalmol. 2000;118:1535-1541

  12. The effects of shift work and time of day on fine motor control during handwriting.

    PubMed

    Hölzle, Patricia; Hermsdörfer, Joachim; Vetter, Céline

    2014-01-01

    Handwriting is an elaborate and highly automatised skill relying on fine motor control. In laboratory conditions handwriting kinematics are modulated by the time of day. This study investigated handwriting kinematics in a rotational shift system and assessed whether similar time of day fluctuations at the work place can be observed. Handwriting performance was measured in two tasks of different levels of complexity in 34 shift workers across morning (6:00-14:00), evening (14:00-22:00) and night shifts (22:00-6:00). Participants were tested during all three shifts in 2-h intervals with mobile testing devices. We calculated average velocity, script size and writing frequency to quantify handwriting kinematics and fluency. Average velocity and script size were significantly affected by the shift work schedule with the worst performance during morning shifts and the best performance during evening shifts. Our data are of high economic relevance as fine motor skills are indispensable for accurate and effective production at the work place. Handwriting is one of the most complex fine motor skills in humans, which is frequently performed in daily life. In this study, we tested handwriting repeatedly at the work place in a rotational shift system. We found slower handwriting velocity and reduced script size during morning shifts.

  13. [Evaluation of the organization of health services as a strategy for the prevention and control of visceral leishmaniasis].

    PubMed

    Barbosa, Miriam Nogueira; Guimarães, Eliete Albano de Azevedo; Luz, Zélia Maria Profeta da

    2016-01-01

To evaluate the organization of health services as a strategy for the prevention and control of visceral leishmaniasis (VL) in Ribeirão das Neves, Minas Gerais, Brazil, from 2010 to 2012. This was a case study evaluation of the degree of implementation of a strategy for the integration of health care services, control of zoonosis and epidemiological surveillance; it consisted of observing the work process, interviewing health professionals and analysing secondary data from information systems. Implementation was partially adequate (84%); in terms of structure, the human resources component had the worst evaluation (64%), whilst in terms of work process, evaluation was 80% for reorganization of care and 77% for surveillance; in the period 2010-2012 there was a 20% increase in reported cases of VL and a 20% reduction in the time interval between reporting a case and starting treatment. The strategy contributed to the improvement of the organization of VL prevention and control actions.

  14. Equilibrium temperature in a clump of bacteria heated in fluid.

    PubMed Central

    Davey, K R

    1990-01-01

A theoretical model was developed and used to estimate quantitatively the "worst case", i.e., the longest, time to reach equilibrium temperature in the center of a clump of bacteria heated in fluid. For clumps with 10 to 10⁶ cells heated in vapor, such as dry and moist air, and liquid fluids such as purees and juices, predictions show that temperature equilibrium will occur with sterilization temperatures up to 130 degrees C in under 0.02 s. Model development highlighted that the controlling influence on time for heating up the clump is the surface convection thermal resistance and that the internal conduction resistance of the clump mass is negligible by comparison. The time for a clump to reach equilibrium sterilization temperature was therefore decreased with relative turbulence (velocity) of the heating fluid, such as occurs in many process operations. These results confirm widely held suppositions that the heat-up time of bacteria in vapor or liquid is not significant with usual sterilization times. PMID:2306095
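
    The argument above is the standard lumped-capacitance reasoning; in the usual textbook notation (restated here as background, not the paper's full model), a small Biot number means the internal conduction resistance is negligible and the clump temperature relaxes exponentially with a time constant set by surface convection:

        \mathrm{Bi} = \frac{h L_c}{k} \ll 1
        \quad\Longrightarrow\quad
        \frac{T(t) - T_\infty}{T_0 - T_\infty} = e^{-t/\tau},
        \qquad
        \tau = \frac{\rho V c_p}{h A}

    Increasing the convection coefficient h (for example through turbulence in the heating fluid) shrinks tau, which is why relative fluid velocity shortens the heat-up time.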

  15. Effectiveness of the SYSTEM 1E Liquid Chemical Sterilant Processing System for reprocessing duodenoscopes.

    PubMed

    McDonnell, Gerald; Ehrman, Michele; Kiess, Sara

    2016-06-01

A troubling number of health care-acquired infection outbreaks and transmission events, some involving highly resistant microbial pathogens and resulting in serious patient outcomes, have been traced to reusable, high-level disinfected duodenoscopes in the United States. The Food and Drug Administration (FDA) requested a study be conducted to verify the liquid chemical sterilization efficacy of the SYSTEM 1E® Liquid Chemical Sterilant Processing System (STERIS Corporation, Mentor, OH) with varied duodenoscope designs under especially arduous conditions. Here, we describe the system's performance under worst-case SYSTEM 1E® processing conditions. The test protocol challenged the system's performance by running a fractional cycle to evaluate reduction of recoverable test spores from heavily contaminated endoscopes, including all channels and each distal tip, under worst-case SYSTEM 1E® processing conditions. All devices were successfully liquid chemically sterilized, showing greater than a 6 log₁₀ reduction of Geobacillus stearothermophilus spores at every inoculation site of each duodenoscope tested, in less than half the exposure time of the standard cycle. The successful outcome of the additional efficacy testing reported here indicates that the SYSTEM 1E® is an effective low-temperature liquid chemical sterilization method for duodenoscopes and other critical and semicritical devices. It offers a fast, safe, convenient processing alternative while providing the assurance of a system expressly tested and cleared to achieve liquid chemical sterilization of specific validated duodenoscope models.

  16. Investigation of the Human Response to Upper Torso Retraction with Weighted Helmets

    DTIC Science & Technology

    2013-09-01

coverage of each test. The Kodak system is capable of recording high-speed motion up to a rate of 1000 frames per second. For this study, the video… the measured center-of-gravity (CG) of the worst-case test helmet fell outside the current limits and no injuries were observed, it can be stated…

  17. When Food Is a Foe.

    ERIC Educational Resources Information Center

    Fitzgerald, Patricia L.

    1998-01-01

    Although only 5% of the population has severe food allergies, school business officials must be prepared for the worst-case scenario. Banning foods and segregating allergic children are harmful practices. Education and sensible behavior are the best medicine when food allergies and intolerances are involved. Resources are listed. (MLH)

  18. Shuttle ECLSS ammonia delivery capability

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The possible effects of excessive requirements on ammonia flow rates required for entry cooling, due to extreme temperatures, on mission plans for the space shuttles, were investigated. An analysis of worst case conditions was performed, and indicates that adequate flow rates are available. No mission impact is therefore anticipated.

  19. 41 CFR 102-80.145 - What is meant by “flashover”?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m² (1.8 Btu/ft²/sec). Reasonable Worst Case…

  20. 41 CFR 102-80.145 - What is meant by “flashover”?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

Flashover means fire conditions in a confined area where the upper gas layer temperature reaches 600 °C (1100 °F) and the heat flux at floor level exceeds 20 kW/m² (1.8 Btu/ft²/sec). Reasonable Worst Case…

  1. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    PubMed Central

    Alguliyev, Rasim M.; Aliguliyev, Ramiz M.; Mahmudova, Rasmiyya S.

    2015-01-01

Personnel evaluation is an important process in human resource management. The multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. First, from a computational complexity point of view, the presented model is efficient. Second, it yields a higher acceptable advantage than the fuzzy VIKOR method. PMID:26516634
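
    To make the ranking core concrete, the sketch below implements plain crisp VIKOR (compute the group utility S, the individual regret R, and the compromise index Q, then rank by Q); the triangular fuzzy numbers and the paper's modification are not reproduced, and the scores, weights, and v are made up.

        def vikor(scores, weights, v=0.5):
            """Rank alternatives by crisp VIKOR. scores[i][j] is the benefit of
            alternative i on criterion j (higher is better); assumes each
            criterion discriminates, i.e. best != worst on every column."""
            m, n = len(scores), len(scores[0])
            best = [max(row[j] for row in scores) for j in range(n)]
            worst = [min(row[j] for row in scores) for j in range(n)]
            S, R = [], []
            for row in scores:
                d = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                     for j in range(n)]
                S.append(sum(d))
                R.append(max(d))
            s_star, s_minus = min(S), max(S)
            r_star, r_minus = min(R), max(R)
            Q = [v * (S[i] - s_star) / (s_minus - s_star)
                 + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
                 for i in range(m)]
            return sorted(range(m), key=lambda i: Q[i])  # best alternative first

        print(vikor([[7, 9, 6], [8, 6, 8], [6, 8, 9]], [0.5, 0.3, 0.2]))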

  2. A tale of two California droughts: Lessons amidst record warmth and dryness in a region of complex physical and human geography

    NASA Astrophysics Data System (ADS)

    Swain, Daniel L.

    2015-11-01

    The state of California has experienced the worst drought in its historical record during 2012-2015. Adverse effects of this multiyear event have been far from uniformly distributed across the region, ranging from remarkably mild in most of California's densely populated coastal cities to very severe in more rural, agricultural, and wildfire-prone regions. This duality of impacts has created a tale of two very different California droughts—highlighting enhanced susceptibility to climate stresses at the environmental and socioeconomic margins of California. From a geophysical perspective, the persistence of related atmospheric anomalies has raised a number of questions regarding the drought's origins—including the role of anthropogenic climate change. Recent investigations underscore the importance of understanding the underlying physical causes of extremes in the climate system, and the present California drought represents an excellent case study for such endeavors. Meanwhile, a powerful El Niño event in the Pacific Ocean offers the simultaneous prospect of partial drought relief but also an increased risk of flooding during the 2015-2016 winter—a situation illustrative of the complex hydroclimatic risks California and other regions are likely to face in a warming world.

  3. Evaluating landfill aftercare strategies: A life cycle assessment approach.

    PubMed

    Turner, David A; Beaven, Richard P; Woodman, Nick D

    2017-05-01

This study investigates the potential impacts caused by the loss of active environmental control measures during the aftercare period of landfill management. A combined mechanistic solute flow model and life cycle assessment (LCA) approach was used to evaluate the potential impacts of leachate emissions over a 10,000-year time horizon. A continuum of control-loss possibilities occurring at different times and for different durations was investigated for four different basic aftercare scenarios, including a typical aftercare scenario involving a low permeability cap and three accelerated aftercare scenarios involving higher initial infiltration rates. Assuming a 'best case' where control is never lost, the largest potential impacts resulted from the typical aftercare scenario. The maximum difference between potential impacts from the 'best case' and the 'worst case', where control fails at the earliest possible point and is never reinstated, was only a fourfold increase. This highlights potential deficiencies in standard life cycle impact assessment practice, which are discussed. Nevertheless, the results show how the influence of active control loss on the potential impacts of landfilling varies considerably depending on the aftercare strategy used, and highlight the importance that leachate treatment efficiencies have upon impacts.

  4. Yemen in a Time of Cholera: Current Situation and Challenges.

    PubMed

    Al-Mekhlafi, Hesham M

    2018-03-19

Since early 2015, Yemen has been in the throes of a grueling civil war, which has devastated the health system and public services, and created one of the world's worst humanitarian disasters. The country is currently facing a cholera epidemic, the world's largest on record, surpassing one million (1,061,548) suspected cases, with 2,373 related deaths since October 2016. Cases were first confirmed in Sana'a city and then spread to almost all governorates except Socotra Island. Continued efforts are being made by the World Health Organization and international partners to contain the epidemic through improving water, sanitation and hygiene, setting up diarrhea treatment centers, and improving the population's awareness about the disease. The provision of clean water and adequate sanitation is imperative as an effective long-term solution to prevent the further spread of this epidemic. Cholera vaccination campaigns should also be conducted as a preventive measure.

  5. Decision making with epistemic uncertainty under safety constraints: An application to seismic design

    USGS Publications Warehouse

    Veneziano, D.; Agarwal, A.; Karaca, E.

    2009-01-01

The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project.
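
    A toy numerical sketch of the decision rule described above (all numbers invented): choose the protection level that minimizes expected loss under the total, aleatory-plus-epistemic, uncertainty on the demand.

        # Illustrative only: a discrete demand distribution standing in for the
        # total uncertainty, and a loss combining construction cost with the
        # probability-weighted cost of exceedance.
        design_levels = [0.10, 0.15, 0.20]                    # candidate designs
        demand_probs = {0.05: 0.70, 0.15: 0.25, 0.30: 0.05}   # total uncertainty

        def expected_loss(level):
            construction = 100.0 * level
            p_exceed = sum(p for demand, p in demand_probs.items() if demand > level)
            return construction + 1000.0 * p_exceed

        best = min(design_levels, key=expected_loss)
        print(best, expected_loss(best))   # 0.15 wins over both cheaper and dearer designs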

  6. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update.

    PubMed

    Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane

    2018-03-01

A reliable prediction of the migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then show that the use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so the equation can be used for a reasoned overestimation of diffusivity.

  7. Review of Article, A Case for Submicrosecond Rise-Time Lightning Current Pulses for Use in Aircraft Induced-Coupling Studies, by D. W. Clifford, E. P. Krider and M. A. Uman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabayan, H.S.; Zicker, J.D.

The amplitudes of currents due to lightning are considerably larger than NEMP-induced currents in both the time and frequency domains. The more important quantity for aperture illumination is the rate of rise of the current. The analysis performed for this in the memorandum is unsatisfactory, since the artificial double exponential model was used. Still, the lightning rate of rise is only twice as high as that due to NEMP, even when the absolute worst (or presently known) lightning pulse is used. A much better way to do this comparison is to use actual LEMP data and NEMP from an actual weapon. Furthermore, because of lack of data, no electric field analysis was undertaken.

  8. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing the transfer efficiency of different pixel geometries.

  9. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions balancing conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
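
    For reference, the sketch below computes an empirical CVaR via the Rockafellar-Uryasev formulation CVaR_a(X) = min_t { t + E[(X - t)^+] / (1 - a) }, which is the kind of quantity such a framework minimizes over design decisions; the dissatisfaction samples and probability level are invented, and the facility-location optimization itself is not reproduced.

        def cvar(samples, alpha):
            """Empirical CVaR at level alpha. The minimizing t is always one of
            the sample values, so scanning the samples suffices."""
            def objective(t):
                tail = sum(max(x - t, 0.0) for x in samples) / len(samples)
                return t + tail / (1.0 - alpha)
            return min(objective(t) for t in samples)

        dissatisfaction = [0.1, 0.2, 0.2, 0.3, 0.9]
        # alpha -> 0 recovers the mean; larger alpha moves toward the worst case.
        print(cvar(dissatisfaction, alpha=0.8))   # 0.9 here (worst 20% of samples)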

  10. 40 CFR 266.106 - Standards to control metals emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...

  11. 40 CFR 266.106 - Standards to control metals emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... HAZARDOUS WASTE MANAGEMENT FACILITIES Hazardous Waste Burned in Boilers and Industrial Furnaces § 266.106... implemented by limiting feed rates of the individual metals to levels during the trial burn (for new... screening limit for the worst-case stack. (d) Tier III and Adjusted Tier I site-specific risk assessment...

  12. 49 CFR 238.431 - Brake system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Brake system. 238.431 Section 238.431... Equipment § 238.431 Brake system. (a) A passenger train's brake system shall be capable of stopping the... train is operating under worst-case adhesion conditions. (b) The brake system shall be designed to allow...

  13. Assessment of the Incentives Created by Public Disclosure of Off-Site Consequence Analysis Information for Reduction in the Risk of Accidental Releases

    EPA Pesticide Factsheets

    The off-site consequence analysis (OCA) evaluates the potential for worst-case and alternative accidental release scenarios to harm the public and environment around the facility. Public disclosure would likely reduce the number/severity of incidents.

  14. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  15. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  16. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  17. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  18. 33 CFR 155.1230 - Response plan development and evaluation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VESSELS Response plan requirements for vessels carrying animal fats and vegetable oils as a primary cargo... carry animal fats or vegetable oils as a primary cargo must provide information in their plan that identifies— (1) Procedures and strategies for responding to a worst case discharge of animal fats or...

  19. Competitive Strategies and Financial Performance of Small Colleges

    ERIC Educational Resources Information Center

    Barron, Thomas A., Jr.

    2017-01-01

    Many institutions of higher education are facing significant financial challenges, resulting in diminished economic viability and, in the worst cases, the threat of closure (Moody's Investor Services, 2015). The study was designed to explore the effectiveness of competitive strategies for small colleges in terms of financial performance. Five…

  20. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  1. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  2. 40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...

  3. 30 CFR 254.21 - How must I format my response plan?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... divide your response plan for OCS facilities into the sections specified in paragraph (b) and explained in the other sections of this subpart. The plan must have an easily found marker identifying each.... (ii) Contractual agreements. (iii) Worst case discharge scenario. (iv) Dispersant use plan. (v) In...

  4. Safety in the Chemical Laboratory: Laboratory Air Quality: Part I. A Concentration Model.

    ERIC Educational Resources Information Center

    Butcher, Samuel S.; And Others

    1985-01-01

    Offers a simple model for estimating vapor concentrations in instructional laboratories. Three methods are described for measuring ventilation rates, and the results of measurements in six laboratories are presented. The model should provide a simple screening tool for evaluating worst-case personal exposures. (JN)
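
    The screening model referred to is typically a well-mixed box model; a generic form (an assumption here, not necessarily the authors' exact equations) relates the room concentration to the emission rate G, the ventilation rate Q, and the room volume V:

        C(t) = \frac{G}{Q}\left(1 - e^{-Qt/V}\right)
        \;\longrightarrow\;
        C_{ss} = \frac{G}{Q} \quad (t \to \infty)

    In practice a mixing factor k is often inserted (replacing Q by Q/k) so that imperfect mixing pushes the estimate toward the worst-case personal exposure.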

  5. A Didactic Analysis of Functional Queues

    ERIC Educational Resources Information Center

    Rinderknecht, Christian

    2011-01-01

    When first introduced to the analysis of algorithms, students are taught how to assess the best and worst cases, whereas the mean and amortized costs are considered advanced topics, usually saved for graduates. When presenting the latter, aggregate analysis is explained first because it is the most intuitive kind of amortized analysis, often…

  6. Genetically modified crops and aquatic ecosystems: considerations for environmental risk assessment and non-target organism testing.

    PubMed

    Carstens, Keri; Anderson, Jennifer; Bachman, Pamela; De Schrijver, Adinda; Dively, Galen; Federici, Brian; Hamer, Mick; Gielkens, Marco; Jensen, Peter; Lamp, William; Rauschen, Stefan; Ridley, Geoff; Romeis, Jörg; Waggoner, Annabel

    2012-08-01

    Environmental risk assessments (ERA) support regulatory decisions for the commercial cultivation of genetically modified (GM) crops. The ERA for terrestrial agroecosystems is well-developed, whereas guidance for ERA of GM crops in aquatic ecosystems is not as well-defined. The purpose of this document is to demonstrate how comprehensive problem formulation can be used to develop a conceptual model and to identify potential exposure pathways, using Bacillus thuringiensis (Bt) maize as a case study. Within problem formulation, the insecticidal trait, the crop, the receiving environment, and protection goals were characterized, and a conceptual model was developed to identify routes through which aquatic organisms may be exposed to insecticidal proteins in maize tissue. Following a tiered approach for exposure assessment, worst-case exposures were estimated using standardized models, and factors mitigating exposure were described. Based on exposure estimates, shredders were identified as the functional group most likely to be exposed to insecticidal proteins. However, even using worst-case assumptions, the exposure of shredders to Bt maize was low and studies supporting the current risk assessments were deemed adequate. Determining if early tier toxicity studies are necessary to inform the risk assessment for a specific GM crop should be done on a case by case basis, and should be guided by thorough problem formulation and exposure assessment. The processes used to develop the Bt maize case study are intended to serve as a model for performing risk assessments on future traits and crops.

  7. Maritime Security Cooperation in the Strait of Malacca

    DTIC Science & Technology

    2008-06-01

Banlaoi, "Maritime Security Outlook for Southeast Asia," in The Best of Times, the Worst of Times, edited by Joshua Ho and Catherine Zara Raymond. Singapore: World Scientific Publishing Co. Pte. Ltd., 2005.

  8. Scattered UV irradiation during VISX excimer laser keratorefractive surgery.

    PubMed

    Hope, R J; Weber, E D; Bower, K S; Pasternak, J P; Sliney, D H

    2008-04-01

To evaluate the potential occupational health hazards associated with scattered ultraviolet (UV) radiation during photorefractive keratectomy (PRK) using the VISX Star S3 excimer laser. The Laser Vision Center, National Naval Medical Center, Bethesda, Maryland, USA. Intraoperative radiometric measurements were made with the Ophir Power/Energy Meter (LaserStar Model PD-10 with silicon detector) during PRK treatments as well as during required calibration procedures at a distance of 20.3 cm from the left cornea. These measurements were evaluated using a worst-case scenario for exposure, and then compared with the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Values (TLVs) to perform a risk/hazard analysis. During the PRK procedures, the highest measured value was 248.4 nJ/pulse. During the calibration procedures, the highest measured UV scattered radiation level was 149.6 nJ/pulse. The maximum treatment time was 52 seconds. Using a worst-case scenario in which all treatments used the maximum power and time, the total energy per eye treated was 0.132 mJ/cm² and the total UV radiation at close range (80 cm from the treated eye) was 0.0085 mJ/cm². With a workload of 20 patients, the total occupational exposure at 80 cm to actinic UV radiation in an 8-hour period would be 0.425 mJ/cm². The scattered actinic UV laser radiation from the VISX Star S3 excimer laser did not exceed occupational exposure limits during a busy 8-hour workday, provided that operating room personnel were at least 80 cm from the treated eye. While the use of protective eyewear is always prudent, this study demonstrates that the trace amounts of scattered laser emissions produced by this laser do not pose a serious health risk even without the use of protective eyewear.

  9. The ball in play demands of international rugby union.

    PubMed

    Pollard, Benjamin T; Turner, Anthony N; Eager, Robin; Cunningham, Daniel J; Cook, Christian J; Hogben, Patrick; Kilduff, Liam P

    2018-03-03

Rugby union is a high intensity intermittent sport, typically analysed via set time periods or rolling average methods. This study reports the demands of international rugby union via global positioning system (GPS) metrics expressed as mean ball in play (BiP), maximum BiP (max BiP), and whole match outputs. Single cohort cross sectional study involving 22 international players, categorised as forwards and backs. A total of 88 GPS files from eight international test matches were collected during 2016. An Opta Sportscode timeline was integrated into the GPS software to split the data into BiP periods. Metres per minute (m·min⁻¹), high metabolic load per min (HML), accelerations per min (Acc), high speed running per min (HSR), and collisions per min (Coll) were expressed relative to BiP periods and over the whole match (>60 min). Whole match metrics were significantly lower than all BiP metrics (p<0.001). Mean and max BiP HML (p<0.01) and HSR (p<0.05) were significantly higher for backs versus forwards, whereas Coll were significantly higher for forwards (p<0.001). In plays lasting 61 s or greater, max BiP m·min⁻¹ was higher for backs. Max BiP m·min⁻¹, HML, HSR and Coll were all time dependent (p<0.05), showing that both movement metrics and collision demands differ as the length of play continues. This study uses a novel method of accurately assessing the BiP demands of rugby union. It also reports typical and maximal demands of international rugby union that can be used by practitioners and scientists to target training of worst-case scenarios at international intensity. Backs covered greater distances at higher speeds and demonstrated higher HML in general play as well as in 'worst case scenarios'; conversely, forwards perform a higher number of collisions.

  10. Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Naresh; Baone, Chaitanya; Veda, Santosh

    2014-12-31

Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst-case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future, and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real time will become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance on single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators. This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.

  11. Transport and mixing of a volume of fluid in a complex geometry

    NASA Astrophysics Data System (ADS)

    Gavelli, Filippo

This work presents the results of the experimental investigation of an entire sequence of events leading to an unwanted injection of boron-depleted water into the core of a PWR. The study is subdivided into three tasks: the generation of a dilute volume in the primary system, its transport to the core, and the mixing encountered along the path. Experiments conducted at the University of Maryland (UM) facility show that, during a Small-Break LOCA transient, volumes of dilute coolant are segregated in the system by means of phase-separating energy transport from the core to the steam generators (Boiler Condenser Mode). Two motion-initiating mechanisms are considered: the resumption of natural circulation during the recovery of the primary liquid inventory, and the reactor coolant pump startup under BCM conditions. During the inventory recovery, various phenomena are observed that contribute to the mixing of the dilute volumes prior to the resumption of flow. The pump activation, instead, occurs in a stagnant system; therefore, no mixing of the unborated liquid has occurred. Since an unmixed slug has the potential for a larger reactivity excursion than a partially mixed one, the pump-initiated flow resumption represents the worst-case scenario. The impulse-response method is applied, for the first time, to the problem of mixing in the downcomer. This allows the mixing to be expressed in terms of two parameters, the dispersion number and the residence time, characteristics of the flow distribution in the complex annular geometry. Other important results are obtained from the analysis of the experimental data with this procedure. It is shown that the turbulence generated by the pump impeller has a significant impact on the overall mixing. Also, the geometric discontinuities in the downcomer (in particular, the gap enlargement below the cold leg elevation) are shown to be the cause of vortex structures that greatly enhance the mixing process.
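
    The two parameters named here are those of the standard axial-dispersion residence-time model; for small dispersion numbers the exit-age distribution takes the familiar near-Gaussian form (quoted as textbook background, not from the thesis):

        E(\theta) = \frac{1}{2\sqrt{\pi\,(D/uL)}}
        \exp\!\left[-\frac{(1-\theta)^2}{4\,(D/uL)}\right],
        \qquad \theta = \frac{t}{\tau}

    where D/uL is the dispersion number and tau the residence time, so fitting a measured tracer response yields exactly the two mixing parameters used in the analysis.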

  12. Tissue Engineering Initiative

    DTIC Science & Technology

    2000-08-01

forefoot with the foot in the neutral position, and (b) similar to (a) but with heel landing. Although the authors reported no absolute strain values… diameter of sensors (or, in the case of a rectangular sensor, width as measured along pin axis). Worst case: strike line from inside edges of sensors… potoroo it is just prior to "toe strike." The locomotion of the potoroo is described as digitigrade, unlike humans, who walk in a plantigrade manner

  13. Space Based Intelligence, Surveillance, and Reconnaissance Contribution to Global Strike in 2035

    DTIC Science & Technology

    2012-02-15

include using high altitude air platforms and airships as a short-term solution, and small satellites with an Operationally Responsive Space (ORS) launch… irreversible threats, along with a worst case scenario. Section IV provides greater detail of the high altitude air platform, airship, and commercial space… Resultantly, the U.S. could use high altitude air platforms, airships, and cyber to complement its space systems in case of denial, degradation, or

  14. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on the average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
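
    A sketch of the first, preprocessing-free approach described above, presumably direct character comparison (an assumption; the paper's exact variant may differ): each query costs O(n) in the worst case, and it is the average case that makes the method fast in practice.

        def lce(s, i, j):
            """Length of the longest substring of s starting at both i and j."""
            k = 0
            while i + k < len(s) and j + k < len(s) and s[i + k] == s[j + k]:
                k += 1
            return k

        print(lce("abcabcax", 0, 3))  # 4, since "abca" starts at both positions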

  15. Managing in a New Reality

    ERIC Educational Resources Information Center

    Goldstein, Philip J.

    2009-01-01

    The phrase "worst since the Great Depression" has seemingly punctuated every economic report. The United States is experiencing the worst housing market, the worst unemployment level, and the worst drop in gross domestic product since the Great Depression. Although the steady drumbeat of bad news may have made everyone nearly numb, one…

  16. Spike-frequency adaptation in the inferior colliculus.

    PubMed

    Ingham, Neil J; McAlpine, David

    2004-02-01

    We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
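
    The time constants above come from fits of the form r(t) = r_inf + (r_0 - r_inf)·exp(-t/tau); a minimal sketch of such a fit on synthetic data (not the recordings) is:

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, r_inf, r0, tau):
            return r_inf + (r0 - r_inf) * np.exp(-t / tau)

        t = np.linspace(0.0, 300.0, 61)     # ms after the step to best IPD
        rng = np.random.default_rng(0)
        rate = decay(t, 40.0, 120.0, 52.9) + rng.normal(0.0, 3.0, t.size)
        params, _ = curve_fit(decay, t, rate, p0=(30.0, 100.0, 40.0))
        print(f"fitted tau = {params[2]:.1f} ms")   # close to the 52.9 ms mean above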

  17. "The Prognosis, Doctor?"

    ERIC Educational Resources Information Center

    Everhart, Nancy

    1998-01-01

    Updates a 1994 report on school library staffing, highlighting states with the best and worst student/librarian ratios, states requiring full-time certified library media specialists, states with site-based management, states replacing librarians with technology specialists. Lists states requiring full-time specialists for elementary,…

  18. A novel N-input voting algorithm for X-by-wire fault-tolerant systems.

    PubMed

    Karimi, Abbas; Zarafshan, Faraneh; Al-Haddad, S A R; Ramli, Abdul Rahman

    2014-01-01

Voting is an important operation in the multichannel computation paradigm and in the realization of ultrareliable and real-time control systems, arbitrating among the results of N redundant variants. These systems include N-modular redundant (NMR) hardware systems and diversely designed software systems based on N-version programming (NVP). Depending on the characteristics of the application and the type of selected voter, the voting algorithms can be implemented for either hardware or software systems. In this paper, a novel voting algorithm is introduced for real-time fault-tolerant control systems, appropriate for applications in which N is large. Its behavior has been evaluated in software under different scenarios of error injection on the system inputs. The results of the analyses, through plots and statistical computations, demonstrate that this novel algorithm does not have the limitations of some popular voting algorithms such as median and weighted voters; moreover, it is able to significantly increase the reliability and availability of the system, in the best case by 2489.7% and 626.74%, respectively, and in the worst case by 3.84% and 1.55%, respectively.
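
    For contrast with the novel algorithm (which is not reproduced here), the sketch below shows the classical median voter that the paper names among the popular baselines: with N redundant channels, the median output cannot be dragged away by a minority of faulty channels.

        def median_voter(values):
            """Median of N redundant channel outputs (cleanest when N is odd)."""
            ranked = sorted(values)
            return ranked[len(ranked) // 2]

        print(median_voter([10.1, 10.0, 55.3]))  # 10.1: the faulty channel is outvoted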

  19. Maiden outbreaks of dengue virus 1 genotype III in rural central India.

    PubMed

    Barde, P V; Kori, B K; Shukla, M K; Bharti, P K; Chand, G; Kumar, G; Ukey, M J; Ali, N A; Singh, N

    2015-01-01

    Dengue is regarded as the most important arboviral disease. Although sporadic cases have been reported, serotypes responsible for outbreaks have not been identified from central India over the last 20 years. We investigated two outbreaks of febrile illness, in August and November 2012, from Korea district (Chhattisgarh) and Narsinghpur district (Madhya Pradesh), respectively. Fever and entomological surveys were conducted in the affected regions. Molecular and serological tests were conducted on collected serum samples. Dengue-specific amplicons were sequenced and phylogenetic analyses were performed. In Korea and Narsinghpur districts 37·3% and 59% of cases were positive, respectively, for dengue infection, with adults being the worst affected. RT-PCR confirmed dengue virus serotype 1 genotype III as the aetiology. Ninety-six percent of infections were primary. This is the first time that dengue virus 1 outbreaks have been documented from central India. Introduction of the virus into the population and a conducive mosquitogenic environment favouring increased vector density caused the outbreak. Timely diagnosis and strengthening vector control measures are essential to avoid future outbreaks.

  20. Information Visualization Techniques for Effective Cross-Discipline Communication

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2013-04-01

    Collaboration between research groups in different fields is a common occurrence, but it can often be frustrating due to the absence of a common vocabulary. This lack of a shared context can make expressing important concepts and discussing results difficult. This problem may be further exacerbated when communicating to an audience of laypeople. Without a clear frame of reference, simple concepts are often rendered difficult-to-understand at best, and unintelligible at worst. An easy way to alleviate this confusion is with the use of clear, well-designed visualizations to illustrate an idea, process or conclusion. There exist a number of well-described machine-learning and statistical techniques which can be used to illuminate the information present within complex high-dimensional datasets. Once the information has been separated from the data, clear communication becomes a matter of selecting an appropriate visualization. Ideally, the visualization is information-rich but data-scarce. Anything from a simple bar chart, to a line chart with confidence intervals, to an animated set of 3D point-clouds can be used to render a complex idea as an easily understood image. Several case studies will be presented in this work. In the first study, we will examine how a complex statistical analysis was applied to a high-dimensional dataset, and how the results were succinctly communicated to an audience of microbiologists and chemical engineers. Next, we will examine a technique used to illustrate the concept of the singular value decomposition, as used in the field of computer vision, to a lay audience of undergraduate students from mixed majors. We will then examine a case where a simple animated line plot was used to communicate an approach to signal decomposition, and will finish with a discussion of the tools available to create these visualizations.

  1. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance needed for the operation of the ITER machine.
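
    The classical flux-linkage expression validated above is the textbook relation for an ideal Rogowski coil of constant turn density n (turns per unit length) and cross-section A, restated here as background:

        \Phi = \mu_0\, n A \oint_C \mathbf{H}\cdot d\boldsymbol{\ell}
             = \mu_0\, n A\, I_{enc},
        \qquad
        V_{out} = -\frac{d\Phi}{dt} = -\mu_0\, n A\, \frac{dI_{enc}}{dt}

    The output voltage must therefore be integrated in time to recover the enclosed current, which is why integration drift appears among the error sources considered.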

  2. ADAPT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, John; Jankovsky, Zachary; Metzroth, Kyle G

    2018-04-04

The purpose of the ADAPT code is to generate Dynamic Event Trees (DET) using a user-specified set of simulators. ADAPT can utilize any simulation tool which meets a minimal set of requirements. ADAPT is based on the concept of DET, which uses explicit modeling of the deterministic dynamic processes that take place during a nuclear reactor plant system (or other complex system) evolution, along with stochastic modeling. When DET are used to model various aspects of Probabilistic Risk Assessment (PRA), all accident progression scenarios starting from an initiating event are considered simultaneously. The DET branching occurs at user-specified times and/or when an action is required by the system and/or the operator. These outcomes then decide how the dynamic system variables will evolve in time for each DET branch. Since two different outcomes at a DET branching may lead to completely different paths for system evolution, the next branching for these paths may occur not only at separate times, but can be based on different branching criteria. The computational infrastructure allows for flexibility in ADAPT to link with different system simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), analysis of results, and user-friendly graphical capabilities. The ADAPT system is designed for a distributed computing environment; the scheduler can track multiple concurrent branches simultaneously. The scheduler is modularized so that the DET branching strategy can be modified (e.g. biasing towards the worst-case scenario/event). Independent database systems store data from the simulation tasks and the DET structure so that the event tree can be constructed and analyzed later. ADAPT is provided with a user-friendly client which can easily sort through and display the results of an experiment, precluding the need for the user to manually inspect individual simulator runs.

  3. Systems tunnel linear shaped charge lightning strike

    NASA Technical Reports Server (NTRS)

    Cook, M.

    1989-01-01

    Simulated lightning strike testing of the systems tunnel linear shaped charge (LSC) was performed at the Thiokol Lightning Test Complex in Wendover, Utah, on 23 Jun. 1989. The test article consisted of a 160-in. section of the LSC enclosed within a section of the systems tunnel. The systems tunnel was bonded to a section of a solid rocket motor case. All test article components were full scale. The systems tunnel cover of the test article was subjected to three discharges (each discharge was over a different grounding strap) from the high-current generator. The LSC did not detonate. All three grounding straps debonded and violently struck the LSC through the openings in the systems tunnel floor plates. The LSC copper surface was discolored around the areas of grounding strap impact, and arcing occurred at the LSC clamps and LSC ends. This test verified that the present flight configuration of the redesigned solid rocket motor systems tunnel, when subjected to simulated lightning strikes with peak current levels within 71 percent of the worst-case lightning strike condition of NSTS-07636, is adequate to prevent LSC ignition. It is therefore recommended that the design remain unchanged.

  4. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on the smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. In all cases, however, the predictions were poor for scales smaller than that of the worst-resolved direction.
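
    The theoretical prediction of Scotti et al. (1993) is commonly stated as an equivalent filter width with an aspect-ratio correction; quoting the commonly cited form (to be checked against the original), with a_1 = \Delta_1/\Delta_3 and a_2 = \Delta_2/\Delta_3 for grid spacings \Delta_1 \le \Delta_2 \le \Delta_3:

        \Delta_{\mathrm{eq}} = (\Delta_1 \Delta_2 \Delta_3)^{1/3} f(a_1, a_2), \qquad
        f(a_1, a_2) = \cosh \sqrt{ \tfrac{4}{27} \left[ (\ln a_1)^2
                      - \ln a_1 \ln a_2 + (\ln a_2)^2 \right] }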

  5. Selected Parametric Effects on Materials Flammability Limits

    NASA Technical Reports Server (NTRS)

    Hirsch, David B.; Juarez, Alfredo; Peyton, Gary J.; Harper, Susana A.; Olson, Sandra L.

    2011-01-01

    NASA-STD-(I)-6001B Test 1 is currently used to evaluate the flammability of materials intended for use in habitable environments of U.S. spacecraft. The method is a pass/fail upward flame propagation test conducted in the worst-case configuration, which is defined as the combination of a material's thickness, test pressure, oxygen concentration, and temperature that makes the material most flammable. Although simple parametric effects may be intuitive (such as increasing oxygen concentrations resulting in increased flammability), combinations of multi-parameter effects can be more complex. In addition, there are a variety of material configurations used in spacecraft. Such configurations could include, for example, exposed free edges, where fire propagation may differ compared to configurations commonly employed in standard testing. Studies involving combined oxygen concentration, pressure, and temperature effects on flammability limits have been conducted and are summarized in this paper. Additional effects on flammability limits of a material's thickness, mode of ignition, burn-length criteria, and exposed edges are presented. The information obtained will allow proper selection of ground flammability test conditions, support further studies comparing flammability in 1-g with microgravity and reduced-gravity environments, and contribute to persuasive scientific cases for rigorous space system fire risk assessments.

  6. An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors

    PubMed Central

    2014-01-01

    Background Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, i.e. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. Methods We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the aimed application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. Also, the performance, depending on the initialization of the filters, is analyzed using three different configurations ranging from best to worst case. Results Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects the adaptive filter obtains mean errors (standard deviations) of -0.2 min(-1) (0.3 min(-1)) and -0.7 bpm (1.7 bpm) (compared to -0.2 min(-1) (0.4 min(-1)) and 42.0 bpm (6.1 bpm) for the non-adaptive filter) for respiration and heart rate, respectively. In bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that adapting the Kalman filter continuously improves the ability to separate the desired signals from the raw sensor data. The average total computational time needed for the Kalman filters is under 25% of the total signal length rendering it possible to perform the filtering in real-time. Conclusions It is possible to measure in real-time heart and breathing rates using an adaptive Kalman filter approach. Adapting the Kalman filter matrices improves the estimation results and makes the filter universally deployable when measuring cardiorespiratory signals. PMID:24886253

  7. An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors.

    PubMed

    Foussier, Jerome; Teichmann, Daniel; Jia, Jing; Misgeld, Berno; Leonhardt, Steffen

    2014-05-09

    Extracting cardiorespiratory signals from non-invasive and non-contacting sensor arrangements, i.e. magnetic induction sensors, is a challenging task. The respiratory and cardiac signals are mixed on top of a large and time-varying offset and are likely to be disturbed by measurement noise. Basic filtering techniques fail to extract relevant information for monitoring purposes. We present a real-time filtering system based on an adaptive Kalman filter approach that separates signal offsets, respiratory and heart signals from three different sensor channels. It continuously estimates respiration and heart rates, which are fed back into the system model to enhance performance. Sensor and system noise covariance matrices are automatically adapted to the aimed application, thus improving the signal separation capabilities. We apply the filtering to two different subjects with different heart rates and sensor properties and compare the results to the non-adaptive version of the same Kalman filter. Also, the performance, depending on the initialization of the filters, is analyzed using three different configurations ranging from best to worst case. Extracted data are compared with reference heart rates derived from a standard pulse-photoplethysmographic sensor and respiration rates from a flowmeter. In the worst case for one of the subjects the adaptive filter obtains mean errors (standard deviations) of -0.2 min(-1) (0.3 min(-1)) and -0.7 bpm (1.7 bpm) (compared to -0.2 min(-1) (0.4 min(-1)) and 42.0 bpm (6.1 bpm) for the non-adaptive filter) for respiration and heart rate, respectively. In bad conditions the heart rate is only correctly measurable when the Kalman matrices are adapted to the target sensor signals. Also, the reduced mean error between the extracted offset and the raw sensor signal shows that adapting the Kalman filter continuously improves the ability to separate the desired signals from the raw sensor data. The average total computational time needed for the Kalman filters is under 25% of the total signal length rendering it possible to perform the filtering in real-time. It is possible to measure in real-time heart and breathing rates using an adaptive Kalman filter approach. Adapting the Kalman filter matrices improves the estimation results and makes the filter universally deployable when measuring cardiorespiratory signals.
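
    A minimal sketch of an innovation-based adaptive Kalman filter of the general kind described above; the adaptation rule is a generic textbook scheme, not the authors' exact model or tuning:

        import numpy as np

        def adaptive_kf(zs, F, H, Q, R0, x0, P0, alpha=0.05):
            """Filter measurements zs; adapt measurement noise R online."""
            x, P, R = x0.copy(), P0.copy(), R0.copy()
            estimates = []
            for z in zs:
                # Predict
                x = F @ x
                P = F @ P @ F.T + Q
                # Innovation and update
                y = z - H @ x
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ y
                P = (np.eye(len(x)) - K @ H) @ P
                # Adapt R from the innovation statistics (one common rule
                # among several; it continuously re-tunes the filter).
                R = (1 - alpha) * R + alpha * (np.outer(y, y) - H @ P @ H.T)
                estimates.append(x.copy())
            return np.array(estimates)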

  8. The lionfish Pterois sp. invasion: Has the worst-case scenario come to pass?

    PubMed

    Côté, I M; Smith, N S

    2018-03-01

    This review revisits the traits thought to have contributed to the success of Indo-Pacific lionfish Pterois sp. as an invader in the western Atlantic Ocean and the worst-case scenario about their potential ecological effects in light of the more than 150 studies conducted in the past 5 years. Fast somatic growth, resistance to parasites, effective anti-predator defences and an ability to circumvent predator recognition mechanisms by prey have probably contributed to rapid population increases of lionfish in the invaded range. However, evidence that lionfish are strong competitors is still ambiguous, in part because demonstrating competition is challenging. Geographic spread has likely been facilitated by the remarkable capacity of lionfish for prolonged fasting in combination with other broad physiological tolerances. Lionfish have had a large detrimental effect on native reef-fish populations in the northern part of the invaded range, but similar effects have yet to be seen in the southern Caribbean. Most other envisaged direct and indirect consequences of lionfish predation and competition, even those that might have been expected to occur rapidly, such as shifts in benthic composition, have yet to be realized. Lionfish populations in some of the first areas invaded have started to decline, perhaps as a result of resource depletion or ongoing fishing and culling, so there is hope that these areas have already experienced the worst of the invasion. In closing, we place lionfish in a broader context and argue that it can serve as a new model to test some fundamental questions in invasion ecology. © 2018 The Fisheries Society of the British Isles.

  9. A Systematic Review Comparing the Acceptability, Validity and Concordance of Discrete Choice Experiments and Best-Worst Scaling for Eliciting Preferences in Healthcare.

    PubMed

    Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia

    2018-06-01

    The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.

  10. Health Inequalities: Trends, Progress, and Policy

    PubMed Central

    Bleich, Sara N.; Jarlenski, Marian P.; Bell, Caryn N.; LaVeist, Thomas A.

    2013-01-01

    Health inequalities, which have been well documented for decades, have more recently become policy targets in developed countries. This review describes time trends in health inequalities (by sex, race/ethnicity, and socioeconomic status), commitments to reduce health inequalities, and progress made to eliminate health inequalities in the United States, United Kingdom, and other OECD countries. Time-trend data in the United States indicate a narrowing of the gap between the best- and worst-off groups in some health indicators, such as life expectancy, but a widening of the gap in others, such as diabetes prevalence. Similarly, time-trend data in the United Kingdom indicate a narrowing of the gap between the best- and worst-off groups in some indicators, such as hypertension prevalence, whereas the gap between social classes has increased for life expectancy. More research and better methods are needed to measure precisely the relationships between stated policy goals and observed trends in health inequalities. PMID:22224876

  11. A robust impact assessment that informs actionable climate change adaptation: future sunburn browning risk in apple

    NASA Astrophysics Data System (ADS)

    Webb, Leanne; Darbyshire, Rebecca; Erwin, Tim; Goodwin, Ian

    2017-05-01

    Climate change impact assessments are predominantly undertaken for the purpose of informing future adaptation decisions. Often, the complexity of the methodology hinders actionable outcomes. The approach used here illustrates the importance of considering uncertainty in future climate projections while providing robust and simple-to-interpret information for decision-makers. By quantifying current and future exposure of Royal Gala apple to damaging temperature extremes across ten important pome fruit-growing locations in Australia, differences in impact on ripening fruit are highlighted: by the end of the twenty-first century, some locations maintain no sunburn browning risk, while others potentially experience the risk for the majority of the January ripening period. Installation of over-tree netting can reduce the impact of sunburn browning. The benefits of employing this management option varied across the ten study locations. The two approaches explored to help decision-makers assess this information, (a) using sunburn browning risk analogues and (b) identifying hypothetical sunburn browning risk thresholds, resulted in varying recommendations for introducing over-tree netting. These recommendations were location and future time period dependent, with some sites showing no benefit for sunburn protection from nets even by the end of the twenty-first century and others already deriving benefits from employing this adaptation option. Potential best and worst cases of sunburn browning risk and its potential reduction through introduction of over-tree nets were explored. The range of results presented highlights the importance of addressing uncertainty in climate projections that results from different global climate models and possible future emission pathways.
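
    The threshold approach (b) amounts to counting ripening-period days above a damaging temperature; a toy Python illustration follows, in which the 46 degC threshold and the 4 degC netting effect are placeholders, not values from the paper:

        def sunburn_risk_days(daily_tmax, threshold_c=46.0, netting_reduction_c=0.0):
            """Fraction of ripening-period days at risk of sunburn browning."""
            at_risk = sum(1 for t in daily_tmax if t - netting_reduction_c > threshold_c)
            return at_risk / len(daily_tmax)

        january_tmax = [38.0, 41.5, 47.2, 49.0, 44.1, 46.8]  # toy data, deg C
        print(sunburn_risk_days(january_tmax))                           # no nets -> 0.5
        print(sunburn_risk_days(january_tmax, netting_reduction_c=4.0))  # nets    -> 0.0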

  12. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    NASA Astrophysics Data System (ADS)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    In this paper, a reliable cellular automata (CA) model is presented, oriented to faithfully reproducing deceleration and acceleration according to realistic driver reactions when vehicles with different deceleration capabilities are considered. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of driver behavior, vehicle capabilities and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on the impulsive accelerated motion used in most existing CA models. Thus, the proposed model analytically calculates three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. Moreover, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles cannot happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of the traffic jam and the different congested traffic patterns induced by a system with open boundary conditions with an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
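
    The worst-case safe-gap reasoning can be illustrated with uniform decelerated motion; the parameter names and values below are illustrative, not the paper's notation:

        def stopping_distance(v, b):
            """Distance covered while braking from speed v at deceleration b > 0."""
            return v * v / (2.0 * b)

        def is_gap_safe(gap, v_follower, b_follower, v_leader, b_leader,
                        reaction_time=1.0):
            """Worst case: the leader brakes at full b_leader; the follower
            keeps moving for reaction_time, then brakes at b_follower."""
            follower_travel = (v_follower * reaction_time
                               + stopping_distance(v_follower, b_follower))
            leader_travel = stopping_distance(v_leader, b_leader)
            return gap + leader_travel >= follower_travel

        # A 2 m gap at 20 m/s behind a harder-braking truck is unsafe:
        print(is_gap_safe(2.0, 20.0, 4.0, 20.0, 6.0))  # -> False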

  13. A robust impact assessment that informs actionable climate change adaptation: future sunburn browning risk in apple.

    PubMed

    Webb, Leanne; Darbyshire, Rebecca; Erwin, Tim; Goodwin, Ian

    2017-05-01

    Climate change impact assessments are predominantly undertaken for the purpose of informing future adaptation decisions. Often, the complexity of the methodology hinders actionable outcomes. The approach used here illustrates the importance of considering uncertainty in future climate projections while providing robust and simple-to-interpret information for decision-makers. By quantifying current and future exposure of Royal Gala apple to damaging temperature extremes across ten important pome fruit-growing locations in Australia, differences in impact on ripening fruit are highlighted: by the end of the twenty-first century, some locations maintain no sunburn browning risk, while others potentially experience the risk for the majority of the January ripening period. Installation of over-tree netting can reduce the impact of sunburn browning. The benefits of employing this management option varied across the ten study locations. The two approaches explored to help decision-makers assess this information, (a) using sunburn browning risk analogues and (b) identifying hypothetical sunburn browning risk thresholds, resulted in varying recommendations for introducing over-tree netting. These recommendations were location and future time period dependent, with some sites showing no benefit for sunburn protection from nets even by the end of the twenty-first century and others already deriving benefits from employing this adaptation option. Potential best and worst cases of sunburn browning risk and its potential reduction through introduction of over-tree nets were explored. The range of results presented highlights the importance of addressing uncertainty in climate projections that results from different global climate models and possible future emission pathways.

  14. The Best and the Worst of Times for Evolutionary Biology.

    ERIC Educational Resources Information Center

    Avise, John C.

    2003-01-01

    Discusses opportunities and challenges for the field of evolutionary biology, particularly in areas related to molecular genetic technologies, the environment, biodiversity, and public education. (Author/KHR)

  15. RETRAN analysis of multiple steam generator blow down caused by an auxiliary feedwater steam-line break

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1987-01-01

    Analysis results for multiple steam generator blow down caused by an auxiliary feedwater steam-line break performed with the RETRAN-02 MOD 003 computer code are presented to demonstrate the capabilities of the RETRAN code to predict system transient response for verifying changes in operational procedures and supporting plant equipment modifications. A typical four-loop Westinghouse pressurized water reactor was modeled using best-estimate versus worst case licensing assumptions. This paper presents analyses performed to evaluate the necessity of implementing an auxiliary feedwater steam-line isolation modification. RETRAN transient analysis can be used to determine core cooling capability response, departure from nucleate boiling ratio (DNBR) status, and reactor trip signal actuation times.

  16. A Fabry-Pérot electro-optic sensing system using a drive-current-tuned wavelength laser diode.

    PubMed

    Kuo, Wen-Kai; Wu, Pei-Yu; Lee, Chang-Ching

    2010-05-01

    A Fabry-Pérot enhanced electro-optic sensing system that utilizes a drive-current-tuned wavelength laser diode is presented. An electro-optic prober made of LiNbO(3) crystal with an asymmetric Fabry-Pérot cavity is used in this system. To lock the wavelength of the laser diode at the resonant condition, a closed-loop power control scheme is proposed. Experimental results show that the system can keep the electro-optic prober at high sensitivity for a long working time when the closed-loop control function is on. If this function is off, the sensitivity may fluctuate and drop to only one-third of the best level in the worst case.
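
    A minimal sketch of the closed-loop idea, as a generic proportional-integral lock on monitored optical power; the gains and the read_power/set_current interfaces are hypothetical, not the authors' implementation:

        def pi_lock(read_power, set_current, p_setpoint, i0,
                    kp=0.05, ki=0.01, steps=1000):
            """Hold the laser diode at the resonance power set point by
            trimming its drive current (wavelength tunes with current)."""
            current, integral = i0, 0.0
            for _ in range(steps):
                error = p_setpoint - read_power()
                integral += error
                current += kp * error + ki * integral
                set_current(current)
            return current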

  17. International ultraviolet explorer solar array power degradation

    NASA Technical Reports Server (NTRS)

    Day, J. H., Jr.

    1983-01-01

    The characteristic electrical performance of each International Ultraviolet Explorer (IUE) solar array panel is evaluated as a function of several prevailing variables (namely, solar illumination, array temperature and solar cell radiation damage). Based on degradation in the current-voltage characteristics of the array due to solar cell damage accumulated over time from space charged-particle radiation, the available IUE solar array power is determined for life goals of up to 10 years. Best- and worst-case calculations are normalized to actual IUE flight data (available solar array power versus observatory position) to accurately predict future IUE solar array output. It is shown that the IUE solar array can continue to produce more power than is required at most observatory positions for at least 5 more years.
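
    The normalization idea reduces to scaling a degradation curve through the latest flight reading and extrapolating; a toy illustration in which the power level and annual loss rates are made up for the example:

        def projected_power(p_measured, years_ahead, annual_loss):
            """Power after years_ahead more years at a fixed fractional loss."""
            return p_measured * (1.0 - annual_loss) ** years_ahead

        p_now = 210.0  # watts, hypothetical current flight reading
        for label, rate in [("best case", 0.01), ("worst case", 0.03)]:
            print(label, round(projected_power(p_now, 5, rate), 1))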

  18. 78 FR 53494 - Dam Safety Modifications at Cherokee, Fort Loudoun, Tellico, and Watts Bar Dams

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... fundamental part of this mission was the construction and operation of an integrated system of dams and... by the Federal Emergency Management Agency, TVA prepares for the worst case flooding event in order... appropriate best management practices during all phases of construction and maintenance associated with the...

  19. Task 1, Design Analysis Report: Pulsed plasma solid propellant microthruster for the synchronous meteorological satellite

    NASA Technical Reports Server (NTRS)

    Guman, W. J. (Editor)

    1971-01-01

    Thermal vacuum design-support thruster tests indicate no problems under the worst-case conditions of sink temperature and spin rate. The reliability of the system was calculated to be 0.92 for a five-year mission; excluding the main energy storage capacitor, it is 0.98.

  20. 40 CFR 300.320 - General pattern of response.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...., substantial threat to the public health or welfare of the United States, worst case discharge) of the... private party efforts, and where the discharge does not pose a substantial threat to the public health or... 40 Protection of Environment 27 2010-07-01 2010-07-01 false General pattern of response. 300.320...
